
Browsing by master's degree program "Magisterprogrammet i matematik och statistik"


  • Koivurova, Antti (2021)
    This thesis surveys the vast landscape of uncertainty principles of the Fourier transform. The study of these uncertainty principles began in the mid-1920s, following a seminal lecture by Wiener, where he first made the remark that condenses the idea of uncertainty principles: "A function and its Fourier transform cannot be simultaneously arbitrarily small". In this thesis we examine some of the most remarkable classical results, in which different interpretations of smallness are applied. More modern results and links to active fields of research are also presented. We make a great effort to give an extensive list of references in order to build a good, broad understanding of the subject matter. Chapter 2 gives the reader the basic theory sufficient to understand the contents of this thesis. First we discuss Hilbert spaces and the Fourier transform. Since they are central concepts in this thesis, we try to make sure that the reader can get a proper understanding of these subjects from our description of them. Next, we study Sobolev spaces and especially the regularity properties of Sobolev functions. After briefly looking at tempered distributions, we conclude the chapter by presenting the most famous of all uncertainty principles, Heisenberg's uncertainty principle. In Chapter 3 we examine how the rate of decay of a function affects the rate of decay of its Fourier transform. This is the most historically significant form of the uncertainty principle, and therefore many classical results are presented, most importantly the ones by Hardy and Beurling. In 2012 Hedenmalm gave a beautiful new proof of the result of Beurling. We present the proof, after which we briefly discuss the Gaussian function and how it acts as the extremal case of many of the mentioned results. In Chapter 4 we study how the support of a function affects the support and regularity of its Fourier transform.
The magnificent result by Benedicks and the results following it serve as the focal point of this chapter, but we also briefly discuss the Gap problem, a classical problem with recent developments. Chapter 5 links a density-based uncertainty principle to Fourier quasicrystals, a very active field of research. We follow the unpublished work of Kulikov, Nazarov, and Sodin, where an uncertainty principle is first given, after which a formula for generating Fourier quasicrystals, using a density condition from the uncertainty principle, is proved. We end by comparing this formula to other recent formulas generating quasicrystals.
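For reference, the result closing Chapter 2 can be stated compactly. With the Fourier transform normalized as \(\hat{f}(\xi) = \int_{\mathbb{R}} f(x) e^{-2\pi i x \xi}\, dx\), one standard form of Heisenberg's uncertainty principle for \(f \in L^2(\mathbb{R})\) reads (the constant depends on the chosen normalization of the transform):

```latex
\left( \int_{\mathbb{R}} x^2 \lvert f(x) \rvert^2 \, dx \right)
\left( \int_{\mathbb{R}} \xi^2 \lvert \hat{f}(\xi) \rvert^2 \, d\xi \right)
\;\ge\; \frac{\lVert f \rVert_2^4}{16 \pi^2},
```

with equality exactly for (suitably translated, modulated, and dilated) Gaussians, in line with the Gaussian's role as the extremal case noted above.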
  • Joutsela, Aili (2023)
    In my mathematics master's thesis we dive into the wave equation and its inverse problem and try to solve it with neural networks we create in Python. There are different types of artificial neural networks. The basic structure is that there are several layers and each layer contains neurons. The input goes to all the neurons in the first layer, the neurons do calculations and send the output to all the neurons in the next layer. In this way the input data passes through all the neurons, being transformed along the way, and the last layer outputs this transformed data. In our code we use an operator recurrent neural network. The biggest difference between a standard neural network and the operator recurrent neural network is that instead of matrix-vector multiplications we use matrix-matrix multiplications in the neurons. We train the neural networks for a certain number of iterations with training data and then check how well they have learned with test data. It is up to us how long and how far we train the networks. An easy criterion would be that a neural network has learned the inversion completely, but that takes a lot of time and might never happen. So we settle for the situation where the error, the difference between the actual inverse and the inverse calculated by the neural network, is as small as we want. We start the coding by studying matrix inversion. The idea is to teach the neural networks to compute the inverse of a given 2-by-2 real-valued matrix. First we deal with networks that do not have the ReLU activation function in their layers. We seek a learning rate, a small constant, that speeds up the learning of a neural network the most. After this we start comparing networks without ReLU layers to networks with ReLU layers. The hypothesis is that ReLU helps neural networks learn more quickly. After this we study the one-dimensional wave equation and calculate the general form of its solution.
The inverse problem of the wave equation is to recover the wave speed c(x) from boundary terms. Inverse problems in general often do not have a unique solution, but in real life, if we have measured data and some additional a priori information, it is possible to find a unique solution. In our case we know that the inverse problem of the wave equation has a unique solution. When coding the inverse problem of the wave equation we use the same approach as with matrix inversion. First we seek the best learning rate and then start to compare neural networks with and without ReLU layers. The hypothesis once again is that ReLU supports the learning of the neural networks. This turns out to be true, and more clearly so for the wave equation than for matrix inversion. All training was run on a single computer; there is a chance of getting even better results if a more powerful computer is used.
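The matrix-inversion experiment can be sketched in plain NumPy. The following is a hypothetical stand-in for the thesis code, not the actual implementation: it trains an ordinary dense network with one ReLU layer (rather than an operator recurrent network) to invert well-conditioned 2-by-2 matrices, with made-up layer sizes and learning rate.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_batch(n):
    """Random well-conditioned 2x2 matrices and their inverses, flattened."""
    A = rng.uniform(-1.0, 1.0, size=(4 * n, 2, 2))
    det = A[:, 0, 0] * A[:, 1, 1] - A[:, 0, 1] * A[:, 1, 0]
    A = A[np.abs(det) > 0.3][:n]          # keep matrices far from singular
    m = len(A)
    return A.reshape(m, 4), np.linalg.inv(A).reshape(m, 4)

# Two-layer network 4 -> 32 (ReLU) -> 4, trained by plain SGD on the MSE.
W1 = rng.normal(0.0, 0.3, (4, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.3, (32, 4)); b2 = np.zeros(4)
lr = 0.05                                 # in the thesis this rate is tuned

losses = []
for step in range(2000):
    X, Y = sample_batch(64)
    H = np.maximum(X @ W1 + b1, 0.0)      # ReLU hidden layer
    P = H @ W2 + b2
    err = P - Y
    losses.append(float(np.mean(err ** 2)))
    gP = 2.0 * err / err.size             # backpropagation by hand
    gW2, gb2 = H.T @ gP, gP.sum(0)
    gH = gP @ W2.T
    gH[H <= 0.0] = 0.0                    # gradient through ReLU
    gW1, gb1 = X.T @ gH, gH.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(losses[0], losses[-1])              # training error should shrink
```

Comparing runs of this sketch with and without the ReLU nonlinearity mirrors the comparison described above, on a much smaller scale.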
  • Paavonen, Aleksi (2024)
    The ever-changing world of e-commerce prompted the case company to develop a new, improved online store for its business functions, which in turn prompted the need to understand the relevant metrics. The aim of the research is to find the customer behaviour metrics that have explanatory power for the response variable, the count of transactions. Examining these key metrics provides an opportunity to create a sustainable foundation for future analytics. Based on the results, the case company can develop its analytics and understand the weaknesses and strengths of the online store. The data come from the Google Analytics service and each variable receives a daily value, but the data are not treated as a time series. The response variable is not normally distributed, so a linear model was not suitable. Instead, the natural choice was generalized linear models, as they can also accommodate non-normally distributed response variables. Two different models were fitted: one Poisson-distributed and one Gamma-distributed. The models were compared in many ways, but no clear difference between the models' performance was found, so the results from both models were combined. The results provided by the models were quite similar, but there were differences. For this reason, the explanatory variables were divided into three categories: key variables, variables with differing results, and non-significant variables. Key variables have explanatory power for the response variable, and the results of the models were consistent. For variables with differing results, the results of the models differed, and non-significant variables had no explanatory power for the response variable. This categorization facilitates understanding of the results. In total, six explanatory variables were categorized as key variables, one as a variable with differing results, and two as non-significant. In conclusion, it matters which variables are tracked if the efficiency of the web store is to be developed on the basis of transaction efficiency.
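As an illustration of the modelling choice, a Poisson GLM with a log link can be fitted with a short iteratively reweighted least squares (IRLS) loop. The data, coefficients, and single covariate below are simulated stand-ins, not the case company's Google Analytics variables.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated daily data: one behaviour metric x and transaction counts y.
n = 2000
x = rng.uniform(0.0, 2.0, n)
X = np.column_stack([np.ones(n), x])      # design matrix: intercept + metric
beta_true = np.array([0.5, 1.2])          # made-up coefficients
y = rng.poisson(np.exp(X @ beta_true))

# IRLS / Newton iterations for the Poisson GLM with a log link:
#   beta <- beta + (X^T W X)^{-1} X^T (y - mu),   W = diag(mu).
beta = np.array([np.log(y.mean()), 0.0])  # tame starting point
for _ in range(25):
    mu = np.exp(X @ beta)
    beta = beta + np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))

print(beta)                               # close to beta_true at this sample size
```

The Gamma model of the comparison would replace the Poisson variance function mu with mu squared in the weight matrix; the fitting loop is otherwise analogous.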
  • Stowe, William (2023)
    Is it possible to color R^2 with 2 colors in such a way that the vertices of any unit equilateral triangle are not all the same color? This thesis seeks to answer questions of this kind in the field of Euclidean Ramsey Theory. We begin by defining that a finite configuration A is k-Ramsey in R^n if any k-coloring of R^n has a monochromatic set that is congruent to A. We both prove and disprove this property for various configurations, dimensions, and numbers of colors. This includes a discussion of the problem of finding the chromatic number of the plane, and the connection of k-Ramsey problems to immersions of unit-distance graphs. We then attempt to generalize this property to equivalence relations other than congruence and study how this affects which configurations are guaranteed to be monochromatic. Following from the Hales-Jewett Theorem, this line of inquiry peaks with a discussion of Gallai's Theorem, which says that translation and scaling form a sufficient set of group actions to guarantee that all configurations are k-Ramsey for any k, in any dimension. We then turn our attention to the property of Ramsey-ness. A configuration A is said to be Ramsey if for any number of colors k there exists a dimension n such that A is k-Ramsey in R^n. We show that if a configuration is Ramsey, then it must be embeddable in the surface of a sphere of some dimension. Further, we show that any brick, i.e. the Cartesian product of intervals, is Ramsey, and thus any subset of a brick is Ramsey. Finally, we prove that any triangle configuration is Ramsey.
  • Tarpila, Lauri (2024)
    The use of paper has long been a significant part of Finland's economy, and paper is a raw material for many of the world's goods. The current desire to take human rights and the planet's carrying capacity more strongly into account in factory production has led Valmet to consider what the paper machines of the future should be like. In light of current research, this study examined whether there are factors that could be associated with the energy efficiency of machines in general or of paper machines in particular. The knowledge provided by earlier research and by Valmet's expertise was modelled with a statistical model, using consistency and simplicity as the primary criteria: the resulting model was intended to describe, consistently and simply, the associations with energy efficiency suggested by this knowledge. Valmet's data set on all paper machines was then examined, and the strengths of the associations in the model were measured on this data. From the results one can see whether the associations are large enough that it would be practical to try to build more energy-efficient paper machines according to the model. The central finding is that the associations are weak, and there are also considerable problems with the use of the methods and the validity of the assumptions. Building faster machines may improve their energy efficiency, but changing other machine properties for the sake of energy efficiency would not necessarily be worthwhile. From a human-rights perspective, influencing the education level, income level, or freedom at the factories would not necessarily change energy efficiency. The easiest approach could be to raise the price of energy artificially, which would motivate factories to use less energy and thus be more energy efficient. The biggest problems relate to the scarcity of earlier research on the topic and the small scope of this work. Other data sets related to paper machines exist, and studying them could clarify some of these questions. The limited understanding of the topic may cause large or small errors in the results, so the study should by default be considered unreliable.
  • Nyman, Valtteri (2022)
    This thesis gives a short introduction to the PCP theorem, after which the tools needed to prove the theorem are worked through piece by piece. At the end of the thesis the PCP theorem is proved. The complexity class PCP[O(log n), O(1)] contains those problems for which there exists a proof such that, by reading a constant number of its bits, a probabilistic Turing machine can decide the problem while using only a logarithmic amount of randomness relative to the input size. The PCP theorem states that the complexity class NP is contained in the class PCP[O(log n), O(1)]. A coloring is a function that assigns a symbol to each variable in a set. A constraint on some variables is a list of the symbols that the constraint allows to be assigned to those variables. If a coloring assigns to the variables only symbols allowed by a constraint, the constraint is satisfied by the coloring. Optimization problems concern finding colorings such that as many constraints as possible from a set of constraints are satisfied. The PCP theorem has a connection to optimization problems, and the thesis proves the PCP theorem by exploiting this connection. The thesis follows I. Dinur's proof from the 2007 article The PCP Theorem by Gap Amplification. A constraint graph is a graph in which each edge is associated with a constraint. A constraint graph also has an alphabet containing the symbols that may appear in the graph's constraints and colorings. The main lemma of the thesis makes it possible to amplify the relative fraction of constraints in a constraint graph that are unsatisfied by any coloring. The main lemma guarantees that the size of the graph stays in the same order of magnitude and that the size of the graph's alphabet does not change. Moreover, if all constraints of the graph are satisfied by some coloring, then all constraints of the graph produced by the main lemma are still satisfied by some coloring. The main lemma is assembled in three stages, each corresponding to one section of the thesis. In the first of these, Section 4, the structure of the graph is shaped into a form suitable for the following sections. In the second stage, corresponding to Section 6, the number of unsatisfied constraints is amplified using walks on the graph, but at the same time the alphabet grows. In the third stage, Section 5, the alphabet size is reduced to three with a suitable algorithm. Section 7 assembles the main lemma and proves the PCP theorem by iterating it.
  • Sarkkinen, Miika (2023)
    In this thesis we present and prove Roger Penrose's singularity theorem, which is a fundamental result in mathematical general relativity. In 1965 Penrose showed that in Einstein's theory of general relativity, under certain general assumptions on the topology, curvature, and causal structure of a Lorentzian spacetime manifold, the spacetime manifold is null geodesically incomplete. At the time, Penrose's theorem was highly topical in a longstanding debate on the question of whether singularities are formed in the process of gravitational collapse. In the proof of the theorem, novel mathematical techniques were introduced into the study of Einstein's theory of gravity, leading to further important developments in the mathematics of general relativity. Penrose's theorem is built on the methods of semi-Riemannian geometry, in particular Lorentzian geometry. To lay the basis for later constructions, we therefore review the basic concepts and results of semi-Riemannian geometry needed to understand Penrose's theorem. The discussion includes semi-Riemannian metrics, connections, curvature, geodesics, and semi-Riemannian submanifolds. Second, calculus of variations on semi-Riemannian manifolds is introduced and a set of results pertinent to Penrose's theorem is given. The notion of a focal point of a spacelike submanifold is defined, and a proposition stating sufficient conditions for the existence of focal points is presented. Furthermore, we give a series of results that establish a relation between focal points of spacelike submanifolds and causality on a Lorentzian manifold. In the last chapter, we define a family of concepts that can be used to analyze the causal structure of Lorentzian manifolds. In particular, we define the notions of global hyperbolicity, Cauchy hypersurface, and trapped surface, which are central to Penrose's theorem, and show some important properties thereof. Finally, Penrose's theorem is stated and proved in detail.
  • Immonen, Johanna (2024)
    This thesis considers crossing probabilities in 2D critical percolation, and modular forms. In particular, I give an exposition of the theory of modular forms, percolation theory, and complex analysis that is needed to characterise the crossing probabilities by means of modular forms. These results are not mine, but I review them and present full proofs which are omitted in the literature. In the special case of 2 dimensions, percolation theory admits many symmetries due to its conformal invariance at criticality. This makes its study especially fruitful. There are various types of percolation, but let us consider, for example, critical bond percolation on a square lattice. Mark each edge in the lattice black (open) or white (closed), each edge independently and with equal probability. The probability that there is a cluster of connected black edges attached to both the left and the right side of the rectangle is the horizontal crossing probability. Note that there is always either such a black cluster connecting the left and right sides, or a white cluster in the dual lattice connecting the upper and lower sides of the rectangle. This gives us a further symmetry. The crossing probability at the scaling limit, where the mesh size of the square lattice goes to zero, is given by Cardy-Smirnov's formula. This formula was first derived non-rigorously by Cardy, but in 2001 it was proved by Smirnov in the case of triangular site percolation. I present an alternative expression for Cardy-Smirnov's formula in terms of modular forms. In particular, I show that Cardy-Smirnov's formula can be written as an integral of Dedekind's eta function restricted to the positive imaginary axis. For this, one first needs the fact that the conformal cross-ratio for a rectangle corresponds to the values of the modular lambda function on the positive imaginary axis. This follows by applying Schwarz reflection to the conformal map from the rectangle to the upper half-plane given by the Riemann mapping theorem, and finding an explicit expression for the construction using the Weierstrass elliptic function. Using a change of basis for the period module and the uniqueness of analytic continuation, it follows that the analytic extension of the conformal cross-ratio is invariant with respect to the level-2 congruence subgroup of the modular group, and is indeed the modular lambda function. One may then reformulate the hypergeometric differential equation satisfied by Cardy-Smirnov's formula as an equation for a function of lambda. Using the symmetries of the lambda function, one can deduce the relation to Dedekind's eta. Lastly, I show how Cardy-Smirnov's formula is uniquely characterised by two assumptions related to modular transformations. The first assumption arises from the symmetry of the problem, but there is not yet a physical argument for the second.
  • Laarne, Petri (2021)
    The nonlinear Schrödinger equation is a partial differential equation with applications in optics and plasma physics. It models the propagation of waves in the presence of dispersion. In this thesis, we will present the solution theory of the equation on a circle, following Jean Bourgain's work in the 1990s. The same techniques can be applied in higher dimensions and with other similar equations. The NLS equation can be solved in the general framework of evolution equations using a fixed-point method. This method yields well-posedness and growth bounds both in the usual L^2 space and in certain fractional-order Sobolev spaces. The difficult part is achieving good enough bounds on the nonlinear term. These so-called Strichartz estimates involve precise Fourier analysis in the form of dyadic decompositions and multiplier estimates. Before delving into the solution theory, we will present the required analytical tools, chiefly related to the Fourier transform. This chapter also describes the complete solution theory of the linear equation and illustrates differences between unbounded and periodic domains. Additionally, we develop an invariant measure for the equation. Invariant measures are relevant in statistical physics as they lead to useful averaging properties. We prove that the Gibbs measure related to the equation is invariant. This measure is based on a Gaussian measure on the relevant function space, the construction and properties of which we briefly explain.
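For reference, the equation in question, the cubic nonlinear Schrödinger equation on the circle \(\mathbb{T} = \mathbb{R}/\mathbb{Z}\), can be written in a standard form as

```latex
i \, \partial_t u + \partial_x^2 u \pm \lvert u \rvert^2 u = 0,
\qquad u \colon \mathbb{R} \times \mathbb{T} \to \mathbb{C},
```

where the sign of the nonlinearity distinguishes the focusing and defocusing cases.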
  • Karvonen, Elli (2021)
    Topological data analysis studies the shape of a space at multiple scales. Its main tool is persistent homology, which is based on another homology theory, usually simplicial homology. Simplicial homology applies to finite data in real space, and thus it is mainly used in applications. This thesis aims to introduce the theory behind persistent homology and one of its applications, an image completion algorithm. Persistent homology is motivated by the question of which scale is the most essential for studying the shape of data. A filtration contains all the scales we want to explore, and thus it is an essential tool of persistent homology. The thesis focuses on forming a filtration from a Delaunay triangulation and its subcomplexes, alpha complexes. We will find that these provide sufficient tools to track the births and deaths of homology classes, but they are not particularly easy to use in practice. This observation motivates defining a regional complement of the dual alpha graph. We find that the birth and death times of its components and of the essential homology classes correspond. The algorithm utilizes this observation to complete images. The results are good and mainly as could be expected. We note that the algorithm has potential, since it does not need any training or input parameters other than the data. However, future studies are needed to apply it, for example, to three-dimensional data.
  • Moilanen, Eero (2022)
    In the thesis ”P-Fredholmness of Band-dominated Operators, and its Equivalence to Invertibility of Limit Operators and the Uniform Boundedness of Their Inverses”, we present a generalization of the classical Fredholm-Riesz theory with respect to a sequence of approximating projections on direct sums of spaces. The thesis is a progressive introduction to understanding and proving the core result of the generalized Fredholm-Riesz theory, which is stated in the title. The stated equivalence has since been improved, and it can be generalized further by omitting either the initial condition of richness of the operator or the uniform boundedness criterion. Our focus is on the elementary form of this result. We lay the groundwork for the classical Fredholm-Riesz theory by introducing compact operators and defining Fredholmness as invertibility modulo compact operators. Thereafter we introduce the concept of approximating projections on infinite direct sums of Banach spaces; that is, we apply to bounded operators a sequence of projections which approaches the identity operator in the limit, and examine whether we have convergence in the norm sense. This method yields a way to define P-compactness, P-strong convergence, and finally P-Fredholmness. We introduce the notion of limit operators by first shifting, then applying, and then shifting back an operator with respect to an element of a sequence, and afterwards investigating what happens in the P-strong limit of this sequence. Furthermore, we define band-dominated operators as uniform limits of linear combinations of simple multiplication and shift operators. For this subspace of operators we prove that the core result indeed holds true for rich operators.
  • Apell, Kasperi (2023)
    Let L_N denote the maximal number of points in a rate 1 Poisson process on the plane which a piecewise linear increasing path can pass through while traversing from (0, 0) to (N, N). It is well known that the expectation of L_N / N converges to 2 as N increases without bound. A perturbed version of this system can be introduced by superimposing an independent one-dimensional rate c > 0 Poisson process on the main diagonal line {x = y} of the plane. Given this modification, one asks whether, and if so how, the above limit is affected. A particular question of interest has been whether this modified system exhibits a nontrivial phase transition in c, that is, whether there exists a threshold value c_0 > 0 such that the limit is preserved for all c < c_0 but lost outside this interval. Let L^c_N denote the maximal number of points in the system perturbed by c > 0 which an increasing piecewise linear path can pass through while traversing from (0, 0) to (N, N). In 2014, Basu, Sidoravicius, and Sly showed that there is no such phase transition and that, for all c > 0, the expectation of L^c_N / N converges to a number strictly greater than 2 as N increases without bound. This thesis gives an exposition of the arguments used to deduce this result.
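The unperturbed limit E[L_N] / N → 2 can be probed numerically: sorting the Poisson points by x-coordinate reduces the problem to the longest strictly increasing subsequence of the y-coordinates, computable by patience sorting. A small illustrative sketch with an arbitrary N and seed:

```python
import bisect
import numpy as np

rng = np.random.default_rng(2)

def longest_increasing_path(N):
    """Maximal number of rate-1 Poisson points in [0,N]^2 that an
    increasing path from (0,0) to (N,N) can pass through."""
    n = rng.poisson(N * N)                # point count in the square
    pts = rng.uniform(0.0, N, size=(n, 2))
    ys = pts[np.argsort(pts[:, 0]), 1]    # y-coordinates in x-order
    piles = []                            # patience sorting: O(n log n) LIS
    for y in ys:
        i = bisect.bisect_left(piles, y)  # strictly increasing subsequence
        if i == len(piles):
            piles.append(y)
        else:
            piles[i] = y
    return len(piles)

N = 40
L = longest_increasing_path(N)
print(L / N)                              # approaches 2 as N grows
```

For moderate N the ratio still sits noticeably below 2, reflecting the well-known negative lower-order correction of order N^(1/3).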
  • Jylhä, Lotta (2022)
    By Pólya's theorem, the symmetric random walk on the graph Z^d is recurrent if d < 3 and transient if d ≥ 3. Over time, several different methods of proof have emerged for the theorem, originally proved by George Pólya. This thesis goes deeper into two complementary methods and uses them to prove Pólya's theorem. In Section 5.1 a computational proof of Pólya's theorem is presented, which offers a simple and concrete way to study the behaviour of a random walk on a regular graph. The theory of flows, presented in Section 5.2, makes it possible, beyond Pólya's theorem, to study the behaviour of random walks on various graphs more broadly. The necessary background on graphs, Markov chains, and random walks is given in Sections 2 and 3. The proof of Pólya's theorem is divided into two sections. The proof begins in Section 5.1, where Pólya's theorem is shown for the graph Z^d with d ≤ 3 by studying the numbers of cycles and paths in the graph. The combinatorial proof is simple in its idea, but the estimate made in it requires deeper justification. In this thesis that estimate is carried out with Robbins' formula, which is a sharper version of Stirling's formula, the latter being used more often in the literature. Robbins' formula is proved in Section 4. Section 5.2 presents the theory of flows on graphs, with which Pólya's theorem is proved for the graph Z^d with d > 3. The connection between flows on a graph and random walks is found in the notion of energy associated with a flow. It turns out that the energy of the minimal-energy flow on a graph depends on the behaviour of the random walk on the graph. The result is first shown for finite graphs, from which it is extended to infinite graphs by means of the notion of contraction of a graph. In Section 6 the significance of Pólya's theorem is highlighted when the flow theorem is used to show that transience of a random walk is preserved under quasi-isometries of graphs. For this, consequences of the flow theorem and the necessary background on quasi-isometry are presented.
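Pólya's dichotomy can be illustrated with a small Monte Carlo experiment: estimate the fraction of simple random walks on Z^d that revisit the origin within a fixed horizon. The walk counts, horizon, and seed below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

def return_fraction(d, walks=400, steps=400):
    """Fraction of d-dimensional simple random walks that revisit
    the origin within `steps` steps."""
    returned = 0
    for _ in range(walks):
        pos = np.zeros(d, dtype=int)
        for _ in range(steps):
            axis = rng.integers(d)            # pick a coordinate direction
            pos[axis] += rng.choice((-1, 1))  # step +1 or -1 along it
            if not pos.any():                 # back at the origin
                returned += 1
                break
    return returned / walks

f1, f3 = return_fraction(1), return_fraction(3)
print(f1, f3)   # recurrent d=1 returns far more often than transient d=3
```

For d = 1 almost every walk returns within the horizon, while for d = 3 the empirical fraction stays near Pólya's return probability of about 0.34.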
  • Järviniemi, Olli (2021)
    This thesis is motivated by the following questions: What can we say about the set of primes p for which the equation f(x) = 0 (mod p) is solvable when f is (i) a polynomial or (ii) of the form a^x - b? Part I focuses on polynomial equations modulo primes. Chapter 2 focuses on the simultaneous solvability of such equations. Chapter 3 discusses classical topics in algebraic number theory, including Galois groups, finite fields and the Artin symbol, from this point of view. Part II focuses on exponential equations modulo primes. Artin's famous primitive root conjecture and Hooley's conditional solution are discussed in Chapter 4. Tools on Kummer-type extensions are given in Chapter 5, and a multivariable generalization of a method of Lenstra is presented in Chapter 6. These are put to use in Chapter 7, where solutions to several applications, including the Schinzel-Wójcik problem on the equality of orders of integers modulo primes, are given.
  • Rannikko, Juho (2023)
    This master's thesis deals with continuous and discrete systems that lead to differential equations and difference equations. The thesis examines and interprets populations whose individuals experience interference competition and site competition. A hare population is used as the example for interference competition and a desert rat population for site competition. Population sizes are determined by numerous different factors. To get started, individual-level events are first defined, from which individual-level processes are derived. The best possible model must be chosen to describe the most important dynamics of the system, and the model is built on this basis. At this stage, restrictions must be made on how individual behaviour affects the population level. The thesis shows how the resulting models can differ considerably when individual-level behaviour is interpreted in different ways. From the individual-level model, equations describing population-level processes are formed and interpreted. This makes it possible to model the long-term viability of the population. It should be noted here that changes at the individual level can considerably weaken the accuracy of the model. For example, changes in the carrying capacity of the environment or the arrival of an invasive species in the system also affect the sizes of the populations under consideration. This thesis does not deal with the effect of invasive species on the populations considered. After forming the population-level model, the viability of the population experiencing site competition is examined at its equilibrium points and their stability is studied. Phase diagrams are drawn from this, and it is shown graphically how the population density evolves for different parameter values. The thesis goes through parameter values at which the stability of the system changes. At the end of the thesis, the models presented in the earlier chapters are revisited and reinterpreted: discrete models are formed instead of continuous ones. It is observed that differences in interpretation produce quite different models. The logistic difference equation is also examined more closely, and changes in its stability, i.e. bifurcations, are studied. It is observed that the logistic difference equation considered can exhibit chaotic behaviour. The reasons for the chaotic behaviour of the logistic difference equation are reviewed graphically.
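The bifurcations of the logistic difference equation x_{n+1} = r x_n (1 - x_n) can be observed directly by iterating the map for a few values of r; a minimal sketch:

```python
def iterate_logistic(r, x0=0.2, n=1000, keep=8):
    """Iterate x -> r*x*(1-x) and return the last `keep` values."""
    x = x0
    orbit = []
    for _ in range(n):
        x = r * x * (1 - x)
        orbit.append(x)
    return orbit[-keep:]

print(iterate_logistic(2.8))  # all ~0.643: the stable fixed point 1 - 1/r
print(iterate_logistic(3.2))  # alternates between two values: a stable 2-cycle
print(iterate_logistic(3.9))  # irregular values: the chaotic regime
```

Sweeping r from 2.8 towards 4 and plotting the tail of the orbit against r produces the familiar period-doubling bifurcation diagram discussed above.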
  • Nurmela, Janne (2022)
    The quantification of carbon dioxide emissions poses a significant and multi-faceted problem for the atmospheric sciences as part of the research on global warming and greenhouse gases. Emissions originating from point sources, referred to as plumes, can be simulated using mathematical and physical models, such as a convection-diffusion plume model and a Gaussian plume model. The convection-diffusion model is based on the convection-diffusion partial differential equation describing mass transfer in diffusion and convection fields. The Gaussian model is a special case, a solution of the general convection-diffusion equation under the assumptions of a homogeneous wind field, relatively small diffusion, and time independence. Both of these models are used for simulating the plumes in order to find the emission rate of the plume source. An equation for solving the emission rate can be formulated as an inverse problem written as y = F(x) + ε, where y is the observed data, F is the plume model, ε is the noise term, and x is an unknown vector of parameters, including the emission rate, which needs to be solved. For an ill-posed inverse problem, where F is not well behaved, a solution need not exist, but a minimum-norm solution can be found; that is, the solution is a vector x which minimizes a chosen norm function, referred to as a loss function. This thesis focuses on the convection-diffusion and Gaussian plume models, and studies both the differences between these models and their sensitivity. Additionally, this thesis investigates three different approaches to optimizing loss functions: optimal estimation for the linear model, the Levenberg–Marquardt algorithm for the non-linear model, and the adaptive Metropolis algorithm. The goodness of different fits can be quantified by comparing root mean square errors; the better the fit, the smaller the root mean square error. A plume inversion program has been implemented in the Python programming language (version 3.9.11) to test the implemented models and the different algorithms. The parameters' effect on the estimated emission rate is assessed by performing sensitivity tests on simulated data. The plume inversion program is also applied to satellite data, and the validity of the results is considered. Finally, other more advanced plume models and improvements to the implementation are discussed.
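Since the ground-level Gaussian plume concentration is linear in the emission rate Q, the simplest instance of the inversion collapses to one-dimensional least squares. The dispersion parameterization, geometry, and noise level below are made-up illustrative choices, not those of the thesis program:

```python
import numpy as np

rng = np.random.default_rng(4)

def plume_shape(x, y, u=3.0, H=50.0):
    """Ground-level Gaussian plume concentration per unit emission rate,
    with a simple (made-up) power-law dispersion sigma(x) = a * x**b."""
    sigma_y = 0.22 * x ** 0.85
    sigma_z = 0.20 * x ** 0.80
    return (np.exp(-y**2 / (2 * sigma_y**2))
            * np.exp(-H**2 / (2 * sigma_z**2))
            / (np.pi * u * sigma_y * sigma_z))

# Synthetic observations downwind of a source with true rate Q = 120.
x = rng.uniform(200.0, 2000.0, 300)        # along-wind distances
y = rng.uniform(-150.0, 150.0, 300)        # cross-wind offsets
f = plume_shape(x, y)
Q_true = 120.0
obs = Q_true * f + rng.normal(0.0, 0.1 * (Q_true * f).std(), 300)

# Least-squares estimate: Q_hat minimizes ||obs - Q*f||^2 in Q.
Q_hat = (f @ obs) / (f @ f)
print(Q_hat)                               # close to the true rate 120
```

When other parameters (wind speed, stack height, dispersion coefficients) are also unknown, the model becomes non-linear in x and iterative methods such as Levenberg–Marquardt or adaptive Metropolis, as in the thesis, take over.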
  • Andelin, Anni (2023)
    Predator-prey models can be studied from several perspectives, each telling its own story about a real-life phenomenon. The perspective chosen for this thesis is to include prey rescue in the standoff between the predator and the prey. Prey rescue is seen in nature for many species; one example is the standoff between a lion and a hyena: when a lion attacks a hyena, the hyena's herd tries to frighten the lion away. The rescue attempt can either succeed or fail. In this thesis the prey-rescue model is derived both for an individual rescuer and for a group of prey. For both cases, the aim is to derive the functional and numerical responses of the predator, but the focus is on deriving and studying the functional responses. First, a brief background is given to motivate the study. The introduction goes through the most important aspects of predator-prey modelling and gives an example of the simple but widely known Lotka-Volterra predator-prey model. The study begins with the simplest case of prey rescue, individual prey rescue. First, the individual-level states, their processes and all the assumptions of the model are introduced. Then the model is derived and reduced by timescale separation to achieve more interpretable results. The functional response is formed after solving the quasi-equilibrium of the model. It was found that this way of constructing the model gives the popular Holling Type II functional response. Next, it is examined what happens when more and more prey join the standoff, trying to rescue the individual under attack. This is studied on three different timescales: the ultra-fast, intermediate and slow timescales. The process of deriving the model and the functional response is like in the simple case of individual prey rescue, but the calculations become more involved. The functional response obtained was found to be uninteresting, so in conclusion the model was adjusted.
One of the timescales is left out of the study in the hope of more interesting results. The derivation proceeds as in the third chapter, but with more advanced calculations and with different results for the quasi-equilibrium and the functional response. The functional response obtained was found to be worth studying in detail, and this detailed study is done last. It was found that different parameter choices affect the shape of the functional response. The parameters were chosen to be biologically relevant. Assuming that rescue is certain for group size n = 2, the functional response was found to take a humpback form for some choices of the other parameters, and the parameter ranges for which the functional response has a humpback shape were determined.
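The Holling Type II response mentioned above has the standard closed form f(N) = aN / (1 + ahN), with attack rate a and handling time h; it rises with prey density N and saturates at 1/h. A minimal sketch (the parameter values are illustrative assumptions, and this simple form does not produce the humpback shape, which in the thesis arises only from the adjusted group-rescue model):

```python
def holling_type_ii(N, a, h):
    """Holling Type II functional response: attack rate a, handling time h.
    The predation rate saturates at 1/h as prey density N grows large."""
    return a * N / (1.0 + a * h * N)

# Illustrative values, not taken from the thesis:
low = holling_type_ii(1.0, a=0.5, h=2.0)    # response at low prey density
high = holling_type_ii(1e9, a=0.5, h=2.0)   # approaches the saturation level 1/h
```

The monotone, saturating shape is what makes Type II "uninteresting" compared with the humpback responses the thesis goes on to find.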
  • Ranimäki, Jimi (2023)
    It is important that the financial system retains its functionality throughout the macroeconomic cycle. When people lose their trust in banks, the whole economy can face dire consequences. Therefore, accurate and stable predictions of the expected losses of borrowers or loan facilities are vital for the preservation of a functioning economy. The research question of the thesis is: what effect does the choice of calibration type have on the accuracy of the predicted probability of default values? The question is addressed through an elaborate simulation of the whole probability of default model estimation exercise, with a focus on calibrating the rank order model to the macroeconomic cycle. Various calibration functions are included in the study to offer more diversity in the results. Furthermore, the thesis provides insight into the regulatory environment of financial institutions, presenting relevant articles from accords, regulations and guidelines by international and European supervisory agents. In addition, the thesis introduces statistical methods for calibrating a model to the long-run average default rate. Finally, the thesis studies the effect of the calibration type on the estimation of the probability of default parameter. The investigation itself is done by first simulating the data and then applying several different calibration functions, including two logit functions and two Bayesian models, to the simulated data. The simulation exercise is repeated 1 000 times for statistically robust results. Predictive power was measured using the mean squared error and the mean absolute error. The main finding of the investigation was that the simple grades perform unexpectedly well compared to the raw predictions. However, the quasi moment matching approach for the logit function generally resulted in higher predictive power for the raw predictions in terms of the error measures, except when measured against the captured probability of default.
Overall, simple grades and raw predictions yielded similar levels of predictive power, while the master scale approach led to lower ones. It is reasonable to conclude that the best selection of approaches according to the investigation would be the quasi moment matching approach for the logit function, with either the simple grades or the raw predictions calibration type, as the difference in predictive power between these types was minuscule. The calibration approaches investigated were significantly simplified compared with the methods actually used in the industry; for example, industry calibration exercises mainly focus on deriving the correct long-run average default rate over time, whereas this study used only the central tendency of the portfolio as that value.
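One common way to calibrate rank-ordered probability of default predictions to a long-run central tendency is to shift the intercept on the logit scale, which preserves the rank order while moving the portfolio mean to the target. This is a minimal sketch of that idea under assumed toy numbers; it is not the quasi moment matching or Bayesian machinery the thesis actually compares.

```python
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def calibrate_to_central_tendency(pd_raw, target, tol=1e-12):
    """Shift the intercept on the logit scale (found by bisection) until the
    portfolio mean PD matches the long-run central tendency `target`."""
    s = logit(pd_raw)
    lo, hi = -30.0, 30.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if sigmoid(s + mid).mean() > target:
            hi = mid            # mean too high: shift intercept down
        else:
            lo = mid            # mean too low: shift intercept up
    return sigmoid(s + (lo + hi) / 2.0)

pd_raw = np.array([0.01, 0.02, 0.05, 0.10])   # hypothetical rank-ordered PDs
pd_cal = calibrate_to_central_tendency(pd_raw, target=0.04)
```

Because the shift is monotone on the logit scale, the borrowers' ordering is untouched; only the level of the predictions moves, which is the essence of calibrating to the cycle.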
  • Rantaniemi, Eero (2024)
    This thesis studies profinite groups, that is, topological groups that are isomorphic to the limit of an inverse system of finite topological groups. The thesis begins with the general theory of topological groups and an introduction to inverse systems, presenting a collection of their properties. The thesis then turns to profinite groups and presents an important characterization: a topological group is profinite if and only if it is compact and totally disconnected. By the Baire category theorem, it follows that every profinite group is a Baire space. Finally, the thesis treats profinite completions; in particular, the pro-p completion of the integers, that is, the p-adic numbers, is given its own chapter, in which their construction is presented both as the limit of an inverse system and as infinitely long natural numbers, and their properties are examined. The most important of these is Hensel's lemma, which produces a root of a polynomial over the p-adic numbers, provided that a sufficiently good approximation of the root is given. This result is also useful in modular arithmetic. Last, the thesis presents the profinite completion of the integers.
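Hensel's lemma, mentioned above, is constructive: a root of f modulo p with f'(r) not divisible by p can be lifted to a root modulo any power p^k by Newton iteration. A minimal sketch (the polynomial x² − 2 over the 7-adics is an illustrative example, not one taken from the thesis):

```python
def hensel_lift(f, df, r, p, k):
    """Given r with f(r) ≡ 0 (mod p) and f'(r) ≢ 0 (mod p), lift r to a
    root of f modulo p**k by the Newton step r -> r - f(r)/f'(r)."""
    pk = p
    for _ in range(k - 1):
        pk *= p
        # pow(a, -1, m) computes the modular inverse of a modulo m
        r = (r - f(r) * pow(df(r), -1, pk)) % pk
    return r

# x^2 = 2 has the solution 3 modulo 7 (3^2 = 9 ≡ 2); lift it to mod 7^5.
f = lambda x: x * x - 2
root = hensel_lift(f, lambda x: 2 * x, 3, 7, 5)
```

Each Newton step at least doubles the p-adic precision, so k − 1 steps are more than enough to reach p^k; this is exactly the sense in which a "sufficiently good approximation" determines a genuine p-adic root.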
  • Hämäläinen, Jussi (2024)
    In this thesis, we aim to introduce the reader to profinite groups. Profinite groups are defined by two characteristics: firstly, they have a topology defined on them (notably, they are compact). Secondly, they are constructed from some collection of finite groups, each equipped with the discrete topology and together forming what is known as an inverse system. The profinite group emerges as the inverse limit of its constituent groups. This definition is, at this point, necessarily quite abstract. Thus, before we can really understand profinite groups, we must examine two areas. First, we will study topological groups. This will give us the means to deal with groups as topological spaces. Topological groups have some characteristics that differentiate them from general topological spaces: in particular, a topological group is always a homogeneous space. Secondly, we will explore inverse systems and inverse limits, which will take us into category theory. While we could explain these concepts without categories, this thesis takes the view that category theory gives us a useful “50,000-foot view” by placing these ideas in a wider mathematical context. In the second chapter, we will go through preliminary information concerning group theory, general topology and category theory that will be needed later. We will begin with some basic concepts from group theory and point-set topology. These sections will mostly contain information that is familiar from introductory university courses. The chapter will then continue by introducing some basic concepts of category theory, including inverse systems and inverse limits. For these, we will give an application by showing how the Cantor set is homeomorphic to an inverse limit of a collection of finite sets. In the third chapter, we will examine topological groups and prove some of their properties. In the fourth chapter, we will introduce an example of profinite groups: Zp, the additive group of p-adic integers.
This will be expanded into a ring and then into the field Qp. We will discuss the uses of Zp and Qp and show how to derive Zp as an inverse limit of finite groups, each compact in the discrete topology.
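The inverse-limit description of Zp is very concrete: a p-adic integer is a coherent sequence of residues, one in each Z/p^k, where consecutive entries agree under the reduction maps Z/p^(k+1) → Z/p^k. A minimal sketch of this coherence condition (function names and the example x = −1 in Z_5 are illustrative, not from the thesis):

```python
def padic_truncations(x, p, n):
    """The image of the integer x in Z/p^k for k = 1..n; in the inverse
    limit Z_p these coordinates form a coherent sequence."""
    return [x % p**k for k in range(1, n + 1)]

def is_coherent(seq, p):
    """Check that each entry reduces to the previous one under the
    connecting map Z/p^(k+1) -> Z/p^k of the inverse system."""
    return all(b % p**(k + 1) == a
               for k, (a, b) in enumerate(zip(seq, seq[1:])))

# -1 in Z_5 is the "infinitely long" number ...444: each truncation is 5^k - 1.
minus_one = padic_truncations(-1, 5, 4)
```

Here the sequence for −1 is [4, 24, 124, 624], illustrating how ordinary integers (even negative ones) embed into the inverse limit as infinitely long base-p expansions.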