
Browsing by discipline "Matematiikka"


  • Silfverberg, Miikka (University of Helsinki, 2008)
  • Lempiäinen, Tuomo (2014)
    In this thesis we study the theoretical foundations of distributed computing. Distributed computing is concerned with graphs, where each node is a computing unit and runs the same algorithm. The graph serves both as a communication network and as an input for the algorithm. Each node communicates with adjacent nodes in a synchronous manner and eventually produces its own output. All the outputs together constitute a solution to a problem related to the structure of the graph. The main resource of interest is the amount of information that nodes need to exchange. Hence the running time of an algorithm is defined as the number of communication rounds; any amount of local computation is allowed. We introduce several models of distributed computing that are weaker versions of the well-established port-numbering model. In the port-numbering model, a node of degree d has d input ports and d output ports, both numbered with 1, 2, ..., d such that the port numbers are consistent. We denote by VVc the class of all graph problems that can be solved in this model. We define the following subclasses of VVc, corresponding to the weaker models: VV: Input and output port numbers are not necessarily consistent. MV: Input ports are not numbered; nodes receive a multiset of messages. SV: Input ports are not numbered; nodes receive a set of messages. VB: Output ports are not numbered; nodes broadcast the same message to all neighbours. MB: Combination of MV and VB. SB: Combination of SV and VB. This thesis presents a complete classification of the computational power of the models. We prove that the corresponding complexity classes form the following linear order: SB ⊊ MB = VB ⊊ SV = MV = VV ⊊ VVc. To prove SV = MV, we show that any algorithm receiving a multiset of messages can be simulated by an algorithm that receives only a set of messages. The simulation causes an additive overhead of 2∆ - 2 communication rounds, where ∆ is an upper bound for the maximum degree of the graph.
As a new result, we prove that the simulation is optimal: it is not possible to achieve a simulation overhead smaller than 2∆ - 2. Furthermore, we construct a graph problem that can be solved in one round of communication by an algorithm receiving a multiset of messages, but requires at least ∆ rounds when solved by an algorithm receiving only a set of messages.
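The weakest of the models above can be illustrated with a small sketch (not from the thesis): one synchronous round in the SB model, where every node broadcasts a single message to all neighbours and receives a *set* of messages, so duplicates collapse. The graph, states, and the send/receive functions below are hypothetical examples.

```python
# One synchronous communication round in the SB model: same outgoing message
# to every neighbour, incoming messages collapsed into a set.
def sb_round(adj, state, send, receive):
    """adj: {node: [neighbours]}; send/receive define the local algorithm."""
    outgoing = {v: send(state[v]) for v in adj}          # broadcast one message
    return {v: receive(state[v], frozenset(outgoing[u] for u in adj[v]))
            for v in adj}

# Example: each node learns the set of degrees occurring among its neighbours.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
state = {v: len(adj[v]) for v in adj}                    # initial state = degree
result = sb_round(adj, state, send=lambda s: s,
                  receive=lambda s, msgs: (s, msgs))
```

Node 3, for instance, ends the round knowing only that some neighbour has degree 3; it cannot tell how many neighbours sent that value, which is exactly the information the multiset models retain and the set models lose.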
  • Puranen, Ilari (2018)
    We introduce a new model for contingent convertibles. The write-down, or equity conversion, and the default of the contingent convertible are modeled as states of a conditional Markov process. Valuation formulae are derived for different financial contracts, such as CDSs and different types of contingent convertibles. The model can be thought of as an extension of reduced-form models with an additional state. In practical applications, this model could be used for new types of contingent convertible derivatives in a similar fashion as reduced-form models are used for credit derivatives.
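The reduced-form machinery the abstract builds on can be illustrated with a toy CDS valuation (a sketch under strong simplifying assumptions: constant default intensity lam, constant short rate r, recovery R, quarterly premiums; this is not the thesis's conditional Markov model):

```python
import math

# Par CDS spread in a constant-intensity reduced-form model: equate the
# expected discounted premium leg and protection leg.
def cds_par_spread(lam, r, R, T, dt=0.25):
    times = [dt * i for i in range(1, int(T / dt) + 1)]
    disc = lambda t: math.exp(-r * t)            # discount factor
    surv = lambda t: math.exp(-lam * t)          # survival probability
    premium_leg = sum(disc(t) * surv(t) * dt for t in times)
    protection_leg = sum(disc(t) * (surv(t - dt) - surv(t)) for t in times)
    return (1 - R) * protection_leg / premium_leg

spread = cds_par_spread(lam=0.02, r=0.01, R=0.4, T=5.0)
```

For small intensities the par spread is close to the credit-triangle value lam * (1 - R); adding an extra write-down state on top of default is, schematically, what the thesis's model does.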
  • Kalaja, Eero (2020)
    Nowadays the amount of data collected on individuals is massive. Making this data more available to data scientists could be tremendously beneficial in a wide range of fields. Sharing data is not a trivial matter, however, as it may expose individuals to malicious attacks. The concept of differential privacy, first introduced in the seminal work of Cynthia Dwork (2006b), offers solutions for tackling this problem: applying random noise to the shared statistics protects the individuals while allowing data analysts to use the data to improve predictions. The input perturbation technique is a simple way of privatizing data, in which noise is added to the whole data set. This thesis studies an output perturbation technique, where the calculations are done with the real data but only the sufficient statistics are released. This method requires a smaller amount of noise, making the analysis more accurate. Yu-Xiang Wang (2018) improves the model by introducing the adaptive AdaSSP algorithm to fix the instability issues of the previously used Sufficient Statistics Perturbation (SSP) algorithm. In this thesis we verify the results shown by Yu-Xiang Wang (2018) and look into the pre-processing steps more carefully. Yu-Xiang Wang used some unusual normalization methods, especially regarding the sensitivity bounds. We are able to show that these had little effect on the results, and the AdaSSP algorithm retains its superiority over the SSP algorithm also when combined with more common data standardization methods. A small adjustment to the noise levels is suggested for the algorithm to guarantee the privacy conditions set by the classical Gaussian mechanism. We combine different pre-processing mechanisms with the AdaSSP algorithm and present a comparative analysis between them. The results show that the robust private linear regression of Honkela et al. (2018) makes significant improvements in predictions with half of the data sets used for testing.
The combination of the AdaSSP algorithm with robust private linear regression often brings us closer to the non-private solutions.
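The sufficient-statistics-perturbation idea can be sketched in a few lines: compute X^T X and X^T y on the real data, add Gaussian noise, and solve the noisy normal equations. The noise scale and ridge term below are illustrative placeholders, not the calibrated values of Wang's AdaSSP.

```python
import numpy as np

# Sketch of the SSP idea: perturb the sufficient statistics, then solve.
def ssp_linear_regression(X, y, noise_scale, rng):
    d = X.shape[1]
    XtX = X.T @ X + rng.normal(0.0, noise_scale, (d, d))
    XtX = (XtX + XtX.T) / 2                      # keep the perturbed matrix symmetric
    Xty = X.T @ y + rng.normal(0.0, noise_scale, d)
    lam = 1e-3                                   # small ridge term for numerical stability
    return np.linalg.solve(XtX + lam * np.eye(d), Xty)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
theta_true = np.array([1.0, -2.0, 0.5])
y = X @ theta_true + rng.normal(0.0, 0.1, 500)
theta_hat = ssp_linear_regression(X, y, noise_scale=1.0, rng=rng)
```

With plenty of data the perturbation is small relative to X^T X and the private estimate stays close to the true coefficients; AdaSSP's contribution is choosing the ridge term adaptively so this remains stable when X^T X is ill-conditioned.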
  • Salonen, Ella (2020)
    In this thesis we prove a generalized form of Mercer's theorem, and go through the underlying mathematics involved in the result. Mercer's theorem is an important result in the theory of integral equations, as it can be used as a tool in solving the trace of integral operators. With certain assumptions on a topological space X and measure space (X,dµ), the generalized theorem states that the trace of a positive and self-adjoint bounded integral operator on L^2(X,dµ) with a continuous kernel can be determined by integrating the diagonal of the kernel function. Whether the integral operator is trace class then depends on whether the value of the integral is finite. We start the thesis by introducing the general settings we have for the theorem, and provide wider background for the main assumptions. We assume that X is a locally compact Hausdorff space that is σ-compact, and µ is a Radon measure on X with support equal to X. We also need the following technical assumption. Since X is σ-compact, there exists an increasing sequence of compact subsets C_n with union equal to X. We assume that for each C_n there exists a sequence of increasingly fine partitions, compatible with the measure µ. We then go through the basics on Banach spaces, and we introduce the L^p spaces. Theory on Hilbert spaces is presented in greater detail. We introduce some classes of bounded linear operators on Hilbert spaces, including self-adjoint and positive operators. Some spectral theory is considered, first for Banach algebras in general, and then for the Banach algebra of bounded linear operators on a complex Banach space. The space of bounded linear operators on a Hilbert space can be seen as a C^*-algebra, and results for the spectra of different kinds of Hilbert space operators are given. Compact operators are first defined on Banach spaces. We prove that they form a closed, two-sided ideal in the algebra of bounded linear operators on a Banach space.
We also consider compact operators on a Hilbert space, and of special interest are the Hilbert-Schmidt integral operators on the space L^2, which are proven to be compact. The existence of the canonical decomposition for compact operators is proven as this property is used in several proofs of the thesis. In the final chapter we focus on the theory of Hilbert-Schmidt operators and trace class operators on Hilbert spaces. We show that operators in these classes are compact. Considering the Hilbert-Schmidt operators on the space L^2, we prove that they then correspond to the Hilbert-Schmidt integral operators. A trace is first defined for a positive operator, and then for a trace class operator. Finally, in the last section, we construct a proof for the generalized form of Mercer's theorem. As a result, we find a way to determine the trace of an integral operator that satisfies the assumptions described in the first paragraph.
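The trace formula at the heart of the theorem can be checked numerically on a standard example (my choice, not from the thesis): the continuous positive kernel k(x, y) = min(x, y) on [0, 1], whose operator trace equals the integral of the diagonal, here ∫₀¹ x dx = 1/2.

```python
import numpy as np

# Discretize the integral operator with kernel k(x, y) = min(x, y) on [0, 1]
# and compare the sum of its eigenvalues with the integral of the diagonal.
N = 400
x = (np.arange(N) + 0.5) / N                     # midpoint grid on [0, 1]
K = np.minimum.outer(x, x)                       # kernel matrix k(x_i, x_j)
eigvals = np.linalg.eigvalsh(K / N)              # discretized operator spectrum
trace_from_eigs = eigvals.sum()
trace_from_diagonal = np.mean(x)                 # midpoint quadrature of ∫ k(x, x) dx
```

Both quantities come out as 1/2, matching the known spectrum of this operator, whose eigenvalues ((n - 1/2)π)^{-2} also sum to 1/2.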
  • Koskinen, Kalle Matias (2018)
    This thesis can be regarded as a light, but thorough, introduction to the algebraic approach to quantum statistical mechanics and a subsequent test of this framework in the form of an application to Bose-Einstein condensates. The success of the algebraic approach to quantum statistical mechanics hinges upon the remarkable properties of special operator algebras known as C^*-algebras. These algebras have unique characterization properties which allow one to readily identify the mathematical counterparts of concepts in physics while at the same time maintaining mathematical rigour and clarity. In the first half of this thesis, we focus on abstract C^*-algebras known as the canonical commutation relation algebras (CCR algebras) which are generated by elements satisfying specific commutation relations. The main result in this section is the proof of a certain kind of algebraic uniqueness of these algebras. The main idea of the proof is to utilise the underlying common structure of any of the CCR algebras and explicitly construct an isomorphism between the generators of these algebras. The construction of this isomorphism involves the use of abstract Fourier analysis on groups and various arguments concerning bounded operators. The second half of the thesis concerns the rigorous set-up of the formation of Bose-Einstein condensation. First, one defines the Gibbs grand canonical equilibrium states, and then we specialize to studying the taking of the thermodynamic limit of these systems in various contexts. The main result of this section involves two main elements. The first is that by fixing the temperature and density of the system while varying its activity and volume, there exists a limiting state corresponding to the taking of the thermodynamic limit. The second element concerns the existence of a critical density after which the limiting state begins to show the physical characteristics of Bose-Einstein condensation.
The mathematical issues one faces with Bose-Einstein condensation are mainly related to the unboundedness of the creation and annihilation operators and the definition of the algebra that we are working on. The first issue is relevant to all areas of mathematical physics, and one deals with it in the standard ways. The second issue is more nuanced and is a direct result of the first issue we mentioned. In particular, we would like to define the states on an algebra which contains the operators that we are interested in. The problem is that these operators are unbounded, and, as a result, one must instead use the CCR algebra and show by extension that we can, in fact, also use the unbounded operators in this state.
  • Tiihonen, Leena (2020)
    This thesis treats the fundamental theorem of algebra from a historical perspective. The fundamental theorem of algebra is proved following the proof presented in Lindelöf's textbook, which is based on the idea of Argand's proof. Complex analysis is needed as theoretical background. The validity of the fundamental theorem of algebra is established in the set of complex numbers. The proof of the fundamental theorem of algebra proceeds through four theorems. First, the continuity of the modulus of a polynomial function is shown. The second theorem shows that inside a suitably chosen circle the modulus of the polynomial attains, in at least one point, a smaller value than outside the circle or on its boundary. The third theorem shows that inside the circle there is a point at which the modulus of the polynomial attains the infimum of its values. In the proof of this theorem, the infimum of the values of the modulus is established on a bounded region. The location of the infimum is narrowed down by dividing the bounded region into finitely many subsets, in at least one of which the infimum of the values of the modulus can be localized. Repeating the subdivision on that subset localizes the infimum of the modulus to an ever smaller region. The fourth theorem gives the exact value of the infimum of the modulus of the polynomial; its proof uses the polar coordinate representation. The modulus of the polynomial attains its infimum in at least one point inside the circle, and at that point the value of the infimum is zero.
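The minimum-modulus argument behind this proof is easy to visualize numerically (an illustration of the idea, not part of the thesis): on a fine grid over a region containing the roots, the minimum of |p(z)| is essentially zero and is attained near a root, here for the example polynomial p(z) = z² + 1 with roots ±i.

```python
import numpy as np

# Evaluate |p(z)| on a grid and locate its minimum; for p(z) = z^2 + 1
# the minimum is (nearly) zero, attained near the roots ±i.
xs = np.linspace(-2.0, 2.0, 401)
X, Y = np.meshgrid(xs, xs)
Z = X + 1j * Y
modulus = np.abs(Z**2 + 1)
i_min = np.unravel_index(np.argmin(modulus), modulus.shape)
z_min = Z[i_min]                                  # grid point of smallest modulus
```

Subdividing the grid around z_min and repeating is precisely the bisection-style localization described in the third theorem.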
  • Wirzenius, Henrik (2012)
    Amenability is a notion that occurs in the theory of both locally compact groups and Banach algebras. The research on translation-invariant measures during the first half of the 20th century led to the definition of amenable locally compact groups. A locally compact group G is called amenable if there is a positive linear functional of norm 1 in L^∞(G)^* that is left-invariant with respect to the given group operation. During the same time the theory of Hochschild cohomology for Banach algebras was developed. A Banach algebra A is called amenable if the first Hochschild cohomology group H^1(A, X^*) = {0} for all dual Banach A-bimodules X^*, that is, if every continuous derivation D : A → X^* is inner. In 1972 B. E. Johnson proved that the group algebra L^1(G) for a locally compact group G is amenable if and only if G is amenable. This result justifies the terminology amenable Banach algebra. In this Master's thesis we present the basic theory of amenable Banach algebras and give a proof of Johnson's theorem.
  • Salonen, Tuomas (2017)
    In this work, an automated procedure for extracting chemical profiles of illicit drugs from chromatographic-mass spectrometric data is presented along with a method for comparison of the profiles using Bayesian inference. The described methods aim to ease the work of a forensic chemist who is tasked with comparing two samples of a drug, such as amphetamine, and delivering an answer to a question of the form 'Are these two samples from the same source?' Additionally, more statistical rigour is introduced to the process of comparison. The chemical profiles consist of the relative amounts of certain impurities present in seized drug samples. In order to obtain such profiles, the amounts of the target compounds must be recovered from chromatographic-mass spectrometric measurements, which amounts to searching the raw signals for peaks corresponding to the targets. The areas of these peaks must then be integrated and normalized by the sum of all target peak areas. The automated impurity profile extraction presented in this thesis works by first filtering the data corresponding to a sample, which includes discarding irrelevant parts of the raw data, estimating and removing signal baseline using the asymmetrically reweighted penalized least squares (arPLS) algorithm, and smoothing the relevant signals using a Savitzky-Golay (SG) filter. The SG filter is also used to estimate signal derivatives. These derivatives are used in the next step to detect signal peaks from which parameters are estimated for an exponential-Gaussian hybrid peak model. The signal is reconstructed using the estimated model peaks and optimal parameters are found by fitting the reconstructed signal to the measurements via non-linear least squares methods. In the last step, impurity profiles are extracted by integrating the areas of the optimized models for target compound peaks. These areas are then normalized by their sum to obtain relative amounts of the substances.
In order to separate the peaks from noise, a model for noise dependency on signal level was fitted non-parametrically to replicate measurements of amphetamine quality control samples. This model was used to compute detection limits based on the estimated baseline of the signals. Finally, the classical Pearson-correlation-based comparison method for these impurity profiles was compared to two Bayesian methods, the Bayes factor (BF) and the predictive agreement (PA). The Bayesian methods used a probabilistic model assuming normally distributed values with a normal-gamma prior distribution for the mean and precision parameters. These methods were compared using simulation tests and application to 90 samples of seized amphetamine.
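Two steps of the pipeline, Savitzky-Golay smoothing and peak-area integration, can be sketched on synthetic data (the peak shape and noise level below are invented for illustration; the coefficient construction mirrors what scipy.signal.savgol_filter computes):

```python
import numpy as np

# Savitzky-Golay smoothing weights: least-squares polynomial fit in a
# sliding window, evaluated at the window centre.
def savgol_coeffs(window, order):
    m = window // 2
    A = np.vander(np.arange(-m, m + 1), order + 1, increasing=True)
    return np.linalg.pinv(A)[0]            # row 0 = fitted value at the centre

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 1001)
peak = 2.0 * np.exp(-0.5 * ((t - 5.0) / 0.3) ** 2)   # Gaussian peak, height 2, sigma 0.3
signal = peak + rng.normal(0.0, 0.05, t.size)
coeffs = savgol_coeffs(31, 3)                        # symmetric smoothing weights
smoothed = np.convolve(signal, coeffs, mode="same")
area = smoothed.sum() * (t[1] - t[0])                # rectangle-rule peak area
```

The recovered area approximates the analytic value 2·σ·√(2π) ≈ 1.50; in the real pipeline such areas are normalized by their sum to give the impurity profile.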
  • Pannila, Tomi (2016)
    In this master's thesis we develop homological algebra using category theory. We develop basic properties of abelian categories, triangulated categories, derived categories, derived functors, and t-structures. At the end of most of the chapters there is a short section of notes which guide the reader to further results in the literature. Chapter 1 consists of a brief introduction to category theory. We define categories, functors, natural transformations, limits, colimits, pullbacks, pushouts, products, coproducts, equalizers, coequalizers, and adjoints, and prove a few basic results about categories like Yoneda's lemma, a criterion for a functor to be an equivalence, and a criterion for adjunction. In chapter 2 we develop basics about additive and abelian categories. Examples of abelian categories are the category of abelian groups and the category of R-modules over any commutative ring R. Every abelian category is additive, but an additive category does not need to be abelian. In this chapter we also introduce complexes over an additive category, some basic diagram chasing results, and the homotopy category. Some well known results that are proven in this chapter are the five lemma, the snake lemma and functoriality of the long exact sequence associated to a short exact sequence of complexes over an abelian category. In chapter 3 we introduce a method, called localization of categories, to invert a class of morphisms in a category. We give a universal property which characterizes the localization up to unique isomorphism. If the class of morphisms one wants to localize is a localizing class, then we can use the formalism of roofs and coroofs to represent the morphisms in the localization. Using this formalism we prove that the localization of an additive category with respect to a localizing class is an additive category. In chapter 4 we develop basic properties of triangulated categories, which are also additive categories.
We prove basic properties of triangulated categories in this chapter and show that the homotopy category of an abelian category is a triangulated category. Chapter 5 consists of an introduction to derived categories. Derived categories are a special kind of triangulated category which can be constructed from abelian categories. If A is an abelian category and C(A) is the category of complexes over A, then the derived category of A is the category C(A)[S^{-1}], where S is the class consisting of quasi-isomorphisms in C(A). In this chapter we prove that this category is a triangulated category. In chapter 6 we introduce right and left derived functors, which are functors between derived categories obtained from functors between abelian categories. We show existence of right derived functors and state the results needed to show existence of left derived functors. At the end of the chapter we give examples of right and left derived functors. In chapter 7 we introduce t-structures. T-structures allow one to do cohomology on triangulated categories with values in the core of a t-structure. At the end of the chapter we give an example of a t-structure on the bounded derived category of an abelian category.
  • Sariola, Tomi (2019)
    Sometimes digital images may suffer from considerable noisiness. Of course, we would like to obtain the original noiseless image. However, this may not even be possible. In this thesis we utilize diffusion equations, particularly anisotropic diffusion, to reduce the noise level of the image. Applying these kinds of methods is a trade-off between retaining information and the noise level. Diffusion equations may reduce the noise level, but they may also blur the edges and thus lose information. We discuss the mathematics and theoretical results behind the diffusion equations. We start with continuous equations and build towards discrete equations, as digital images are fully discrete. The main focus is on iterative methods; that is, we diffuse the image step by step. As it turns out, we need certain assumptions for these equations to produce good results, one of which is a timestep restriction and the other a correct choice of diffusivity function. We construct an anisotropic diffusion algorithm to denoise images and compare it to other diffusion equations. We discuss the edge-enhancing property, the noise removal properties and the convergence of the anisotropic diffusion. Results on test images show that anisotropic diffusion is capable of reducing the noise level of the image while retaining the edges, and, as mentioned, it may even sharpen the edges of the image.
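The iterative scheme described above can be sketched as a Perona-Malik-type explicit update (a minimal sketch with periodic boundaries via np.roll and illustrative parameters; the thesis's scheme and diffusivity may differ). Note the timestep restriction: with diffusivity g ≤ 1 and four neighbours, dt must stay below 1/4 for stability.

```python
import numpy as np

# Explicit anisotropic diffusion: neighbour differences weighted by an
# edge-stopping diffusivity g that is small across large gradients.
def anisotropic_diffusion(img, n_iter=20, kappa=0.2, dt=0.2):
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)      # Perona-Malik diffusivity
    for _ in range(n_iter):
        dn = np.roll(u, 1, axis=0) - u           # differences to the 4 neighbours
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(2)
noisy = 0.5 + 0.1 * rng.normal(size=(64, 64))    # flat image plus noise
denoised = anisotropic_diffusion(noisy)
```

On this flat test image the scheme lowers the noise standard deviation while preserving the mean grey level; on images with edges, the small diffusivity across large differences is what keeps the edges from blurring.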
  • Jääsaari, Jesse Sebastian (2013)
    In 1837 Peter Dirichlet proved a great result concerning prime numbers: every arithmetic progression {an + d}_{n=1}^{∞} with (a, d) = 1 contains infinitely many primes. In the proof he defined the so-called Dirichlet characters, for which many uses were later found in number theory. A Dirichlet character χ (mod q) is a periodic (with period q), completely multiplicative arithmetic function with the following property: χ(n) = 0 when (n, q) > 1 and χ(n) ≠ 0 when (n, q) = 1. This Master's thesis studies the size of the character sum \mathcal{S}_χ(t) = \sum_{n ≤ t} χ(n), where t is a positive real number and χ (mod q) is a non-principal Dirichlet character. Trivially, periodicity implies that |\mathcal{S}_χ(t)| ≤ min(t, q). The first non-trivial estimate dates from 1918, when George Pólya and Ivan Vinogradov proved, independently of each other, that |\mathcal{S}_χ(t)| ≪ \sqrt{q} \log q uniformly in t. This is known as the Pólya–Vinogradov inequality. Assuming the generalized Riemann hypothesis, Hugh Montgomery and Robert Vaughan proved in 1977 that |\mathcal{S}_χ(t)| ≪ \sqrt{q} \log\log q. In 2005 Andrew Granville and Kannan Soundararajan showed that if χ (mod q) is a primitive character of odd bounded order g, then |\mathcal{S}_χ(t)| ≪_g \sqrt{q} (\log Q)^{1-\frac{δ_g}{2}+o(1)}, where δ_g is a constant depending on g and Q is q or (\log q)^{12} depending on whether the generalized Riemann hypothesis is assumed. The proof rested on technical auxiliary results which could be formulated with the help of the notion of pretentiousness. Granville and Soundararajan defined the distance between two multiplicative functions whose values lie in the unit disc by the formula \mathbb{D}(f, g; x) = \sqrt{\sum_{p ≤ x}\frac{1-\Re(f(p) \overline{g}(p))}{p}}, and said that f is g-pretentious if \mathbb{D}(f, g; ∞) is finite.
This distance has many useful properties, and methods based on them are called pretentious methods. After the introduction, Chapter 2 presents definitions and basic results. The purpose of Chapter 3 is to derive auxiliary results needed in Chapter 6. In Chapter 4 pretentiousness is defined, properties of the distance function \mathbb{D}(f, g; x) are proved and some applications are presented. Chapter 5 derives further technical auxiliary results, which follow from the Montgomery–Vaughan estimate. Chapter 6 studies character sums. We begin by proving the Pólya–Vinogradov inequality and Montgomery and Vaughan's strengthening of it. As the main result we derive the estimate (1) with \frac{1}{2}δ_g replaced by the constant δ_g; this was originally proved by Leo Goldmakher. Finally, we use pretentious methods to show that the Pólya–Vinogradov inequality can be strengthened under various assumptions on the characters.
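The Pólya–Vinogradov inequality is easy to check numerically for a concrete non-principal character (an illustration, not from the thesis): the quadratic (Legendre-symbol) character mod a prime p.

```python
import math

# Legendre symbol via Euler's criterion: n^((p-1)/2) mod p is 1, p-1, or 0.
def legendre(n, p):
    r = pow(n % p, (p - 1) // 2, p)
    return r - p if r == p - 1 else r            # map to {-1, 0, 1}

p = 101
partial, partial_sums = 0, []
for n in range(1, p + 1):
    partial += legendre(n, p)
    partial_sums.append(partial)
max_sum = max(abs(s) for s in partial_sums)
pv_bound = math.sqrt(p) * math.log(p)            # ≈ 46.4 for p = 101
```

The sum over a full period vanishes, as it must for a non-principal character, and the largest partial sum stays far below the Pólya–Vinogradov bound √p·log p.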
  • Rantalainen, Aapo (University of Helsinki, 2006)
  • Xu, Tingting (2014)
    The high mortality rate among humans infected with certain types of Avian Influenza (AI) and the potential of a mutation that allows human-to-human transmission are a great concern for public health. We formulate a mathematical model for the prevalence of AI in humans resulting from avian-to-human transmission. The model is important because the higher the prevalence, the higher the risk of a mutation that allows human-to-human transmission, leading to a major epidemic. We formulate and analyse separate deterministic and stochastic versions of the model. Different time scale separation techniques are applied to the models. The influence of certain controllable parameters on the system equilibrium is interpreted from numerical results. Moreover, we also investigate the fluctuation of populations due to demographic stochasticity at the early stage of the prevalence of AI.
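A deterministic spillover model of this flavour can be sketched as a pair of coupled SIS-type compartments (the compartment structure, rates and Euler scheme below are illustrative choices, not the thesis's model):

```python
# Toy deterministic model: SIS dynamics in the bird population, with
# avian-to-human spillover driving human prevalence; explicit Euler steps.
def simulate(beta_b=0.3, beta_bh=0.01, gamma=0.1, mu=0.05, T=200.0, dt=0.01):
    Sb, Ib = 0.99, 0.01          # bird susceptible/infective fractions
    Sh, Ih = 1.0, 0.0            # human susceptible/infective fractions
    for _ in range(int(T / dt)):
        new_b = beta_b * Sb * Ib             # avian-to-avian infections
        new_h = beta_bh * Sh * Ib            # avian-to-human spillover
        Sb += dt * (-new_b + mu * Ib)        # recovered birds return to Sb
        Ib += dt * (new_b - mu * Ib)
        Sh += dt * (-new_h + gamma * Ih)     # recovered humans return to Sh
        Ih += dt * (new_h - gamma * Ih)
    return Sb, Ib, Sh, Ih

Sb, Ib, Sh, Ih = simulate()
```

Both populations are conserved by construction, and the human prevalence Ih settles at a small endemic level proportional to the spillover rate; the thesis's point is that this level controls the mutation risk.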
  • Mäkinen, Ville (2020)
    Objectives: The objective of this thesis is to illustrate the advantages of Bayesian hierarchical models in housing price modeling. Methods: Five Bayesian regression models are estimated for the housing prices. The models use a robust Student’s t-distribution likelihood and are estimated with Hamiltonian Monte Carlo. Four of the models are hierarchical such that the apartments’ neighborhoods are used as a grouping. Model stacking is also used to produce an ensemble model. Model checks are conducted using the posterior predictive distributions. The predictive distributions are also evaluated in terms of calibration and sharpness and using the logarithmic score with leave-one-out cross validation. The logarithmic scores are calculated using Pareto smoothed importance sampling. The R^2-statistics from the point predictions averaged from the predictive distributions are also presented. Results: The results from the models are broadly reasonable as, for the most part, the coefficients of the explanatory variables and the predictive distributions behave as expected. The results are also consistent with the existence of a submarket in central Helsinki where the price mechanism differs markedly from the rest of the Helsinki-Espoo-Vantaa region. However, model checks indicate that none of the models is well-calibrated. Additionally, the models tend to underpredict the prices of expensive apartments.
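The robustness that the Student's t likelihood buys can be illustrated outside the Bayesian machinery with a small frequentist stand-in (my sketch: iteratively reweighted least squares with fixed degrees of freedom and scale, not the thesis's Hamiltonian Monte Carlo fit; the data are synthetic):

```python
import numpy as np

# Robust regression with a Student's t likelihood, fitted by iteratively
# reweighted least squares: large residuals get weight ~ (nu+1)/(r/s)^2.
def t_regression(X, y, nu=3.0, scale=0.5, n_iter=50):
    theta = np.linalg.lstsq(X, y, rcond=None)[0]     # OLS starting point
    for _ in range(n_iter):
        r = y - X @ theta
        w = (nu + 1.0) / (nu + (r / scale) ** 2)     # downweight outliers
        Xw = X * w[:, None]
        theta = np.linalg.solve(X.T @ Xw, Xw.T @ y)  # weighted normal equations
    return theta

rng = np.random.default_rng(3)
n = 200
x = rng.uniform(-1.0, 1.0, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + 0.1 * rng.normal(size=n)
y[:10] += 8.0                                        # gross outliers
theta = t_regression(X, y)
```

Despite the outliers, the fitted intercept and slope stay close to the true values 1 and 2; plain least squares would be pulled noticeably upward, which is exactly why the thesis's models use a t likelihood for prices.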
  • Björkqvist, Sebastian (2014)
    In the thesis, cellular homology groups are defined for cell complexes, and with their help the homology groups of a number of topological spaces are computed. By cell complexes one means topological spaces that are built up step by step by starting with a discrete set of points and then attaching n-cells (closed balls B̄^n) to some part of the complex constructed so far. Homology groups associate an algebraic invariant, more precisely an abelian group, with a topological space. In order to define the cellular homology groups, the singular homology groups are first defined for arbitrary topological spaces, after which the so-called homology axioms, also known as the Eilenberg–Steenrod axioms, are presented. Using the homology axioms, the singular homology groups of the unit sphere S^n are computed, and with this result, together with degree computations for maps from the unit sphere to itself, the cellular homology groups for cell complexes are then constructed. Thereafter it is proved that the singular and cellular homology groups are isomorphic for all cell complexes. Starting from these results one can compute the homology groups of many topological spaces relatively easily by giving the space a cell structure and then computing the cellular homology groups of the space. To demonstrate how the methods presented in the thesis can be used, the homology groups of a number of spaces are computed, among them the cylinder S^1 × I, the torus S^1 × S^1 and the real projective n-space RP^n. The thesis also demonstrates how knowledge of the homology groups of the sphere S^n can be used to prove a number of classical topological results, among them Brouwer's fixed point theorem and the fact that the Euclidean spaces R^n and R^m are homeomorphic if and only if n = m.
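The computational side of cellular homology reduces to linear algebra on the boundary matrices. A small sketch (over the rationals, so it gives Betti numbers but not torsion, for which one would need Smith normal form, e.g. for RP^n): b_k = dim ker ∂_k − rank ∂_{k+1}.

```python
import numpy as np

# Betti numbers of a cell complex from the ranks of its boundary matrices.
def betti_numbers(boundaries, n_cells):
    """boundaries[k] is the matrix of ∂_k : C_k -> C_{k-1}; absent keys mean 0."""
    def rank(M):
        return 0 if M is None else np.linalg.matrix_rank(M)
    bettis = []
    for k in range(len(n_cells)):
        ker = n_cells[k] - rank(boundaries.get(k))       # dim ker ∂_k
        img = rank(boundaries.get(k + 1))                # rank ∂_{k+1}
        bettis.append(ker - img)
    return bettis

# Torus: one 0-cell, two 1-cells, one 2-cell; both boundary maps are zero.
torus = betti_numbers({1: np.zeros((1, 2)), 2: np.zeros((2, 1))}, [1, 2, 1])
```

The result (1, 2, 1) matches H_0 = Z, H_1 = Z², H_2 = Z for the torus computed in the thesis.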
  • Annala, Toni (2016)
    Bézout's theorem, at least the original version, concerns the number of intersection points of two curves in the projective plane. The main purpose of this thesis, apart from proving the classical version of Bézout's theorem, is to give multiple generalizations for it. The first proper chapter, Chapter 2, is devoted to the proof of the classical Bézout's theorem. In the first two sections of the chapter we define projective and affine plane curves, and show some of their basic properties. In the third section we define the resultant of two polynomials, and use the newly acquired tool to prove the upper bound version of Bézout's theorem. The fourth section discusses the multiplicity of a point of intersection. This multiplicity, dependent on algebraic data associated to the intersection, is needed for stating the equality version of the classical Bézout's theorem. In the fifth section we prove this using properties of the intersection multiplicity proved in the fourth section. The third chapter extends the classical Bézout's theorem beyond its original scope. In Section 3.1 we define a crucial tool, the Hilbert polynomial, which allows us to keep track of algebraic information associated to the projective scheme cut out by a homogeneous ideal. This polynomial is not an invariant of the scheme itself; rather it should be thought of as containing information concerning both the intrinsic structure of the subscheme, and about how the subscheme is located in the ambient projective space. The second section of the third chapter quickly summarizes the parts of modern algebraic geometry that are of most use later. Section 3.3 gives the first proper generalization of Bézout's theorem. This generalization is more a quantitative than a qualitative one, as it deals with intersections of projective hyperplanes. The fourth chapter gives a generalization of the upper bound version of Bézout's theorem to a very general case.
We define the geometric multiplicity of a closed subscheme of a projective space, and show that it behaves well under intersections. The geometric multiplicity gives an upper bound for the number of components, hence the generalization of the inequality version of Bézout's theorem. In the final section, 3.5, we define Serre's multiplicity of a component of intersection, and show that the multiplicities given by this formula satisfy the equality version of Bézout's theorem in proper intersections of equidimensional subschemes.
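The classical count is easy to verify numerically on an example of my choosing: the circle x² + y² = 1 and the parabola y = x², both of degree 2, so Bézout predicts 2·2 = 4 intersection points in the complex projective plane, counted with multiplicity. Eliminating y (the resultant computation of Section 2.3, done here by direct substitution) leaves x⁴ + x² − 1 = 0.

```python
import numpy as np

# The 4 complex roots of the eliminant x^4 + x^2 - 1 give the 4 intersection
# points of the circle x^2 + y^2 = 1 and the parabola y = x^2.
coeffs = [1, 0, 1, 0, -1]                       # x^4 + x^2 - 1
xs = np.roots(coeffs)
points = [(x, x**2) for x in xs]                # lift roots back to the parabola
on_circle = [abs(x**2 + y**2 - 1) < 1e-8 for x, y in points]
```

Two of the four points are real (visible in a drawing) and two are complex, which is why the projective, complex count is the one that comes out exact.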
  • Haarala, Akseli (2018)
    The goal of this thesis is to introduce the isoperimetric inequality and various quantitative isoperimetric inequalities. The thesis has two parts. The first one is an overview of the isoperimetric inequality in R^n and some of the known quantitative isoperimetric inequalities. In the first chapter we introduce the isoperimetric inequality and show some possible methods of proving the isoperimetric inequality in R^n for both n=2 and n≥3. In the second chapter we discuss some known quantitative isoperimetric inequalities as well as their proofs. The second part of the thesis is a paper. In this paper we prove a Bonnesen type inequality for so-called s-John domains, s>1, in R^n. We show that the methods that have been applied to John domains in the literature, suitably modified, can be applied to s-John domains. Our result is new and gives a family of Bonnesen type inequalities depending on the parameter s>1.
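The planar inequality L² ≥ 4πA, with equality only for the disc, can be illustrated with regular polygons (a standard example, not from the thesis): the isoperimetric deficit L² − 4πA is positive and shrinks to zero as the polygon approaches a circle.

```python
import math

# Isoperimetric deficit L^2 - 4*pi*A for a regular n-gon with circumradius 1.
def deficit(n):
    area = 0.5 * n * math.sin(2 * math.pi / n)
    perimeter = 2 * n * math.sin(math.pi / n)
    return perimeter**2 - 4 * math.pi * area

d = [deficit(n) for n in (4, 8, 16, 256)]
```

Quantitative isoperimetric inequalities, including the Bonnesen type proved in the thesis, bound how far a set must be from a disc in terms of exactly this deficit.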
  • Kaksonen, Aleks (2014)
    When specifying an insurance model, an insurance company's interest lies in determining its own probability of ruin. A natural idea is to choose the insurance model in which the company's ruin probability is minimized. This thesis scratches that surface by examining the profitability of the bonus system commonly used in motor insurance, from the point of view of the company's asymptotic ruin. In a bonus system the insured moves to a better insurance level if no claims occur during the current year, and drops to a worse insurance level if claims do occur during the same period. In general, the purpose of a bonus system is to make the premiums collected from the insured more equitable, since the premium of an insured on a poor insurance level is higher than that of one on a better level. The work is based on a 1990s publication by T. Lehtonen and H. Nyrhinen on bonus systems. The aim of the thesis is to present the general mathematical theory of bonus systems and to study some of the results of the aforementioned publication. It is shown that, under certain basic assumptions, the company's asymptotic ruin probability is smaller in a two-state bonus system than in a one-state system with a fixed premium. As a first step, the bonus system is defined at a general level, which yields simple equilibrium conditions on the premiums and on the movement of the insured within the bonus system. After this general introduction, the detailed analysis begins by focusing on a fixed insured. Here the two-dimensional transition variable of the model and its properties are studied. The transition variable yields a connection to the annual loss produced by the insured in the different bonus classes. After this, the cumulative loss produced by a single insured over the long run can be examined.
After the analysis of a fixed insured, the natural next step is to model the whole insurance portfolio. In the analysis of the whole portfolio, the same properties are studied as in the earlier stage for a fixed insured. Towards the end of the thesis, the modelling of the company's asymptotic ruin is considered and a connection is established between asymptotic ruin and the earlier definitions. After the general analysis, the general bonus system can be reduced to two-state and one-state bonus systems. After the reduction, the resulting bonus systems can be compared, and it is finally shown that the two-state bonus system is the better choice for the company from the point of view of asymptotic ruin, provided that the company's portfolio is independent and identically distributed or the company knows the claim distribution of each insured.
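The two-state system above can be sketched as a tiny Markov chain (a toy illustration with invented premiums and claim rate, not the thesis's model): a claim-free year moves the insured to the good class, a year with claims to the bad class.

```python
import math

# Two-state bonus system: with Poisson(lam) annual claims, a claim-free
# year has probability q = exp(-lam).
lam = 0.2
q = math.exp(-lam)
P = [[q, 1 - q],                 # transitions from the good class
     [q, 1 - q]]                 # transitions from the bad class
pi = [0.5, 0.5]                  # iterate to the stationary distribution
for _ in range(100):
    pi = [pi[0] * P[0][0] + pi[1] * P[1][0],
          pi[0] * P[0][1] + pi[1] * P[1][1]]
premium_good, premium_bad = 80.0, 130.0          # illustrative premiums
long_run_premium = pi[0] * premium_good + pi[1] * premium_bad
```

Since both rows of P are equal, the stationary distribution is simply (q, 1 − q); the long-run average premium lies between the two class premiums, and comparing such averages against a fixed single-state premium is the starting point of the ruin comparison in the thesis.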