
Browsing by master's degree program "Magisterprogrammet i teoretiska och beräkningsmetoder"


  • Kotipalo, Leo (2023)
    Simulating space plasma on a global scale is computationally demanding due to the system size involved. Modeling regions with variable resolution depending on physical behavior can save computational resources without compromising too much on simulation accuracy. This thesis examines adaptive mesh refinement as a method of optimizing Vlasiator, a global hybrid-Vlasov plasma simulation. The behavior of plasma near the Earth's magnetosphere and the different characteristic scales that need to be considered in simulation are introduced. Kinetic models using statistical methods and fluid methods are examined. Modeling electrons kinetically requires resolutions orders of magnitude finer than for ions, so in Vlasiator ions are modeled kinetically and electrons as a fluid. This allows for lighter simulations while preserving some kinetic effects. The mesh refinement used in Vlasiator is introduced as a method to save memory and computational work. Due to the structure of the magnetosphere, resolution is not uniform across the simulation domain: the tail regions and magnetopause in particular exhibit rapid spatial changes compared to the relatively uniform solar wind. The region to refine is parametrized and static throughout a simulation run. Adaptive mesh refinement based on the simulation data is introduced as an evolution of this method. This provides several benefits: more rigorous optimization of refinement regions, easier reparametrization for different conditions, the ability to follow dynamic structures, and savings in computation time during initialization. Refinement is based on two indices, measuring respectively the spatial rate of change of relevant variables and reconnection. The grid is re-refined at set intervals as the simulation runs. Tests similar to production runs show adaptive refinement to be an efficient replacement for static refinement. The refinement parameters produce results similar to the static method, while giving somewhat different refinement regions. Performance is in line with static refinement, and the refinement overhead is minor. Further avenues of development are presented, including dynamic refinement intervals.
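A minimal sketch of the index-based re-refinement idea described in this abstract, assuming a hypothetical grid object with refine_where/coarsen_where methods; this is not Vlasiator's actual interface.

```python
# Hypothetical sketch of adaptive refinement driven by a spatial-gradient index.
import numpy as np

def gradient_index(field, dx):
    """Spatial rate of change of a variable, normalized to [0, 1]."""
    grad = np.linalg.norm(np.gradient(field, dx), axis=0)
    return grad / (grad.max() + 1e-30)

def adapt_mesh(grid, fields, dx, threshold=0.5, interval=100, step=0):
    """Re-refine the mesh at set intervals based on a refinement index."""
    if step % interval != 0:
        return grid
    # Combine indices from all relevant variables (e.g. density, magnetic field).
    index = np.maximum.reduce([gradient_index(f, dx) for f in fields])
    grid.refine_where(index > threshold)          # hypothetical mesh call
    grid.coarsen_where(index < 0.1 * threshold)   # hypothetical mesh call
    return grid
```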
  • Sukuvaara, Satumaaria (2023)
    Many beyond-the-Standard-Model theories include a first order phase transition in the early universe. A phase transition of this kind is presumed to be able to source gravitational waves that might be observed with future detectors, such as the Laser Interferometer Space Antenna. A first order phase transition from a symmetric (metastable) minimum to the broken (stable) one causes the nucleation of broken phase bubbles. These bubbles expand and then collide. It is important to examine in depth how the bubbles collide, as the events during the collision affect the gravitational wave spectrum. We assume the field to interact very weakly or not at all with the particle fluid in the early universe. The universe also experiences fluctuations due to thermal or quantum effects. We look into how these background fluctuations affect the field evolution and bubble collisions during the phase transition in O(N) scalar field theory. Specifically, we numerically simulate two colliding bubbles nucleated on top of the background fluctuations, with the field being an N-dimensional vector under the O(N) group. Due to the symmetries present, the system can be examined in cylindrical coordinates, lowering the number of simulated spatial dimensions. In this thesis, we perform the calculation of initial state fluctuations and simulate them and two bubbles numerically. We present results of the simulation of the field, concentrating on the effects of fluctuations on the O(N) scalar field theory.
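For illustration, a leapfrog update for an N-component scalar field in one spatial dimension; the thesis works in cylindrical coordinates, which adds geometric terms omitted here, and dV below is an assumed double-well potential gradient, not the thesis' exact potential.

```python
# Minimal leapfrog sketch for an O(N) scalar field on a 1D periodic grid.
import numpy as np

def dV(phi, lam=1.0, v=1.0):
    # Assumed symmetry-breaking (double-well) potential gradient.
    return lam * phi * (np.sum(phi**2, axis=0) - v**2)

def leapfrog_step(phi, pi, dx, dt):
    # phi, pi have shape (N, n_grid): the field vector and its conjugate momentum.
    lap = (np.roll(phi, -1, axis=1) - 2 * phi + np.roll(phi, 1, axis=1)) / dx**2
    pi = pi + dt * (lap - dV(phi))   # kick: update momentum from the force
    phi = phi + dt * pi              # drift: update field from the momentum
    return phi, pi
```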
  • Stirling, Nico Toivo (2023)
    In this thesis a computation of the non-perturbative Lorentzian graviton propagator, which has appeared in the literature, is outlined. Firstly, the necessary ingredients for the computation are introduced and discussed. These include: General Relativity (GR), its path integral quantisation around a Minkowski space background, and the definition of the graviton propagator along with its relation to the one-particle-irreducible (1PI) graviton 2-point function. A brief discussion on the perturbative non-renormalizability of the theory is followed by the introduction of the functional renormalization group (fRG) equation, from which a fRG equation for the scalar coefficient function of the transverse-traceless (TT) 1PI graviton 2-point function is derived. After these ingredients have been introduced, we proceed to outline the computation in question, skipping the details of its most involved steps. The computation starts by defining the spectral function and the Källén-Lehmann spectral representation of propagators. The non-perturbative TT 1PI graviton 2-point function, the propagators and the spectral functions are parameterized, and the fRG flow equation for the TT 1PI graviton 2-point function is used together with certain renormalization conditions to define renormalization group (RG) flow equations for these parameters. The solution of the flow of the parameters is displayed and is used to construct the graviton spectral function and the graviton propagator, which are both displayed graphically. Finally, a discussion of the features of the spectral function and propagator is given, and these results are briefly discussed in the context of the asymptotic safety program for quantum gravity and some of its open issues.
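For reference, the Källén-Lehmann representation mentioned above takes, in one common Euclidean convention (normalizations vary),

\[
G(p^2) = \int_0^\infty \frac{\rho(\mu^2)}{p^2 + \mu^2}\, \mathrm{d}\mu^2,
\]

where \(\rho\) is the spectral function; parameterizing \(\rho\) and the 2-point function and evolving the parameters with the fRG flow is the strategy outlined in the abstract.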
  • Rasola, Miika (2020)
    Resonant inelastic X-ray scattering (RIXS) is one of the most powerful synchrotron based methods for attaining information about the electronic structure of materials. Novel ultra-brilliant X-ray sources, X-ray free electron lasers (XFEL), offer intriguing new possibilities beyond the traditional synchrotron based techniques, facilitating the transition of X-ray spectroscopic methods to the nonlinear intensity regime. Such nonlinear phenomena are well known in the optical energy range, less so at X-ray energies. The transition of RIXS to the nonlinear regime could have significant impact on X-ray based materials research by enabling more accurate measurements of previously observed transitions, allowing the detection of weakly coupled transitions on dilute samples and possibly uncovering completely unforeseen information or working as a platform for novel intricate methods of the future. Nonlinear, or stimulated, RIXS (SRIXS) at an XFEL has already been demonstrated in the simplest possible proof-of-concept case. In this work a comprehensive introduction to SRIXS is presented from a theoretical point of view, starting from the very beginning, thus making it suitable for anyone with a basic understanding of quantum mechanics and spectroscopy. To start off, the principles of many body quantum mechanics are reviewed and the configuration interaction method for representing molecular states is introduced. No previous familiarity with X-ray matter interaction or RIXS is required, as the molecular and interaction Hamiltonians are carefully derived, based on which a thorough analysis of the traditional RIXS theory is presented. In order to stay in touch with the real world, the basic experimental facts are recapped before moving on to SRIXS. First, an intuitive picture of the nonlinear process is presented, shedding some light on the term "stimulated" while introducing basic terminology and some X-ray pulse schemes along with futuristic theoretical examples of SRIXS experiments. After this, a careful derivation of the Maxwell-Liouville-von Neumann theory up to quadrupole order is presented for the first time. Finally, the chapter is concluded with a short analysis of the experimental status quo at XFELs and some speculation on possible transition metal samples where SRIXS in its current state could be applied to observe quadrupole transitions, advancing the field remarkably.
  • Mäkelä, Noora (2022)
    Sum-product networks (SPN) are graphical models capable of handling large amounts of multidimensional data. Unlike many other graphical models, SPNs are tractable if certain structural requirements are fulfilled; a model is called tractable if probabilistic inference can be performed in polynomial time with respect to the size of the model. The learning of SPNs can be separated into two modes, parameter and structure learning. Many earlier approaches to SPN learning have treated the two modes as separate, but it has been found that good results can be achieved by alternating between them. One example of this kind of algorithm was presented by Trapp et al. in the article Bayesian Learning of Sum-Product Networks (NeurIPS, 2019). This thesis discusses SPNs and a Bayesian learning algorithm developed based on the aforementioned algorithm, differing in some of the methods used. The algorithm by Trapp et al. uses Gibbs sampling in the parameter learning phase, whereas here Metropolis-Hastings MCMC is used. The algorithm developed for this thesis was used in two experiments, with a small and simple SPN and with a larger and more complex SPN. The effect of the data set size and the complexity of the data was also explored. The results were compared to those obtained from running the original algorithm developed by Trapp et al. The results show that having more data in the learning phase makes the results more accurate, as it is easier for the model to spot patterns in a larger set of data. It was also shown that the model was able to learn the parameters in the experiments if the data were simple enough, in other words, if each dimension of the data contained only one distribution. In the case of more complex data, where there were multiple distributions per dimension, the computation visibly struggled.
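A minimal Metropolis-Hastings sketch of the kind of parameter update described above; log_posterior is a hypothetical function (the data likelihood under the SPN times a prior), not the thesis' actual implementation.

```python
# Random-walk Metropolis-Hastings over a real-valued parameter vector.
import numpy as np

def metropolis_hastings(log_posterior, theta0, n_steps=10_000, step_size=0.1, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    logp = log_posterior(theta)
    samples = []
    for _ in range(n_steps):
        proposal = theta + step_size * rng.standard_normal(theta.shape)
        logp_prop = log_posterior(proposal)
        # Accept with probability min(1, posterior ratio).
        if np.log(rng.uniform()) < logp_prop - logp:
            theta, logp = proposal, logp_prop
        samples.append(theta.copy())
    return np.array(samples)
```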
  • Pirttikoski, Antti (2021)
    The LHC is the highest energy particle collider ever built and is employed to study elementary particles by colliding protons together. One intriguing study subject at the LHC is the stability of the electroweak vacuum in our universe. The current prediction suggests that the vacuum is in a metastable state. The stability of the vacuum depends on the mass of the top quark, and it is possible that a more precise measurement of the mass could shift the prediction to the border of the metastable and stable states. In order to measure the mass of the top quark more precisely, we need to measure the bottom (b) quarks decaying from it at high precision, as the top quark decays predominantly into a W boson and a b quark. Due to the phenomenon called hadronisation, we cannot measure the quarks directly, but rather as sprays of collimated particles called jets. The jets originating from b quarks (b jets) can be identified by b-tagging. The precise measurement and calibration of the b jet energy is crucial for the top quark mass measurement. This thesis studies b jets and their energy calibration at CMS, one of the general purpose detectors at the LHC. In particular, the b jet energy scale (bJES) and the various phenomena affecting it are under investigation. For example, a large fraction of b jets contain neutrinos, which cannot be measured directly; this increases the uncertainties related to the energy measurement. There are also questions about how precisely the formation and evolution of b jets can be modelled by Monte Carlo event generators, such as Pythia8, which was utilized in this thesis. The aim of this thesis is to evaluate how large an effect on the bJES is caused by the various phenomena that presumably weaken the precision of b jet measurements. The studied phenomena are the semileptonic branching ratios of b hadrons, the branching ratios of b hadron to c hadron decays, the b hadron production fractions and the parameterization of the b quark fragmentation function. The combined effect of all four rescaling features mentioned above suggests that the bJES is known at the 0.2% level. A small shift of -0.1% in the missing transverse energy projection fraction (MPF) response scale is detected at low pT values, which vanishes as pT increases. This is a remarkable improvement over the 0.4-0.5% JES accuracy achieved at CMS during Run 1 of the LHC. However, there are still many ways to improve the performance presented here, and further studies of the rescaling methods are definitely needed before the results can be utilized in bJES corrections for a precision measurement of the top quark mass.
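For orientation, the MPF response referred to above is commonly defined in jet calibration studies as (standard definition quoted from the general literature, not a formula taken from the thesis)

\[
R_{\mathrm{MPF}} = 1 + \frac{\vec{E}_T^{\,\mathrm{miss}} \cdot \vec{p}_T^{\,\mathrm{ref}}}{\left(p_T^{\mathrm{ref}}\right)^2},
\]

where the reference object balances the probe jet in the transverse plane.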
  • Kähärä, Jaakko (2022)
    We study the properties of flat band states of bosons and their potential for all-optical switching. Flat bands are dispersionless energy bands found in certain lattice structures. The corresponding eigenstates, called flat band states, have the unique property of being localized to a small region of the lattice. The high sensitivity of flat band lattices to the effects of interactions could make them suitable for fast, energy-efficient switching. We use the Bose-Hubbard model and computational methods to study multi-boson systems by simulating the time-evolution of the particle states and computing the particle currents. As the systems were small, fewer than ten bosons, the results could be computed exactly. This was done by solving the eigenstates of the system Hamiltonian using exact diagonalization. We focus on a finite-length sawtooth lattice, first simulating weakly interacting bosons initially in a flat band state. Particle current is shown to typically increase linearly with interaction strength. However, by fine-tuning the hopping amplitudes and boundary potentials, particle current through the lattice can be highly suppressed. We use this property to construct a switch which is turned on by pumping the input with control photons. The inclusion of particle interactions disrupts the system, resulting in a large non-linear increase in particle current. We find that certain flat band lattices could be used as a medium for an optical switch capable of controlling the transport of individual photons. In practice, highly optically nonlinear materials are required to reduce the switching time, which is found to be inversely proportional to the interaction strength.
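A minimal exact-diagonalization sketch for a small Bose-Hubbard system, for illustration only; the thesis studies a sawtooth lattice with tuned hoppings and boundary potentials, whereas this assumes a plain 1D chain.

```python
# Exact diagonalization of a tiny 1D Bose-Hubbard chain.
import numpy as np
from itertools import product
from scipy.linalg import eigh

L_SITES, N_BOSONS, J, U = 3, 2, 1.0, 2.0

# Basis: all occupation vectors with N_BOSONS particles on L_SITES sites.
basis = [s for s in product(range(N_BOSONS + 1), repeat=L_SITES) if sum(s) == N_BOSONS]
index = {s: i for i, s in enumerate(basis)}

H = np.zeros((len(basis), len(basis)))
for i, s in enumerate(basis):
    H[i, i] = 0.5 * U * sum(n * (n - 1) for n in s)   # on-site interaction
    for site in range(L_SITES - 1):                    # nearest-neighbour hopping
        if s[site] > 0:                                # b†_{site+1} b_{site} |s>
            t = list(s); t[site] -= 1; t[site + 1] += 1
            amp = -J * np.sqrt(s[site] * (s[site + 1] + 1))
            j = index[tuple(t)]
            H[j, i] += amp
            H[i, j] += amp                             # Hermitian conjugate

energies, states = eigh(H)   # full spectrum of the small system
```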
  • Nurmela, Mika (2022)
    We study a system of cold high-density matter consisting purely of quarks and gluons. The mathematical construction of Quantum Chromodynamics (QCD) introduces interactions between the fields, which modify the thermodynamic properties of the system. In the presence of interactions, we cannot solve the thermodynamic properties of the system analytically. The method is to expand the result as a series in the QCD coupling constant, which describes the strength of the interaction; this is referred to as perturbation theory in the context of thermal field theory (TFT). We introduce the basic calculation methods used in QCD and in TFTs in general. We also include in the calculation the chemical potential associated with the number of quarks in the system. At zero temperature, quarks form a Fermi sphere such that energy states below the chemical potential are Pauli blocked, and the resulting fermionic momentum integrals are modified as a consequence. We can split these integrals into two parts, referred to as the vacuum and matter parts. We can likewise split the calculation of the pressure into two distinct contributions: one from skeleton diagrams and one from ring diagrams. The ring diagrams have unphysical IR divergences that cannot be cancelled using the counterterms, which is why hard thermal loop (HTL) effective field theory (EFT) is introduced. We discuss this HTL framework, which requires the computation of the matter part of the gluon polarization tensor, which we also evaluate in this thesis.
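For orientation, the vacuum-matter split follows from the zero-temperature limit of the Fermi-Dirac distribution,

\[
n_F(E - \mu) = \frac{1}{e^{(E-\mu)/T} + 1} \;\xrightarrow{\,T \to 0\,}\; \theta(\mu - E),
\]

so each fermionic momentum integral separates into a \(\mu\)-independent vacuum piece and a matter piece supported inside the Fermi sphere \(E < \mu\).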
  • Jääskeläinen, Matias (2020)
    This thesis explores descriptors for atmospheric molecular clusters. Descriptors are needed for applying machine learning methods to molecular systems. A collection of descriptors is readily available in the DScribe library, developed at Aalto University for custom machine learning applications; which descriptors to use is up to the user. This study takes the first steps in integrating machine learning into the existing configurational sampling procedure that aims to find the optimal structure for any given molecular cluster of interest. The structure selection step forms a bottleneck in the configurational sampling procedure. A new structure selection method presented in this study uses k-means clustering to find structures that are similar to each other. The clustering results can be used to discard redundant structures more effectively than before, which leaves fewer structures to be calculated with more expensive computations. Altogether this speeds up the configurational sampling procedure. To aid the selection of a suitable descriptor for this application, a comparison of four descriptors available in DScribe is made. A procedure for structure selection by representing atmospheric clusters with descriptors and grouping them with k-means was implemented. The performance of the descriptors was compared with a custom score suitable for this application, and it was found that MBTR outperforms the other descriptors. This structure selection method will be utilized in the existing configurational sampling procedure for atmospheric molecular clusters, but it is not restricted to that application.
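A minimal sketch of descriptor-based structure selection with k-means, assuming the descriptor vectors (e.g. MBTR from DScribe) have already been computed as rows of X; this illustrates the idea rather than reproducing the thesis' pipeline.

```python
# Keep one representative structure per k-means cluster of descriptor vectors.
import numpy as np
from sklearn.cluster import KMeans

def select_representatives(X, n_clusters=50, seed=0):
    """Return indices of the structure nearest each cluster centroid."""
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit(X)
    keep = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
        keep.append(members[np.argmin(d)])
    return np.array(keep)   # structures to pass on to the expensive calculations
```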
  • Pruikkonen, Sanni (2021)
    Stacking of antiaromatic molecules leads to enhanced stability and higher conductivity due to reversed antiaromaticity. It has been shown that cyclophanes consisting of antiaromatic Ni(II) norcorrole subunits have a vertical current-density flux between the two metal ions. The Ni(II) meso-substituted dibenzotetraaza[14]annulene complex fulfills the Hückel rule for being antiaromatic. Upon increasing the temperature above 13 K, solid state Ni(II) meso-substituted dibenzotetraaza[14]annulene changes from diamagnetic to paramagnetic, as seen in its effective magnetic moment. A suggested explanation for this is that there might be a weak interaction between the Ni atoms. In this study, the possible existence of a vertical current-density flux between the two metal ions in the Ni(II) meso-substituted dibenzotetraaza[14]annulene is investigated. In addition, the effect of the Ni and N atoms in Ni(II) 1,5,9,13-tetraaza[16]annulene was studied by replacing Ni with Zn and N with O. Electronic motion in molecules under the influence of a magnetic field is investigated computationally, since at present there is no routine experimental method for doing so. TURBOMOLE, the Gauge-including Magnetically Induced Currents method and Paraview were employed in this study for structure optimization of the molecules, calculation of the current-density flux and current strengths in the molecules, and visualisation of the current-density pathways, respectively. The results of this study do not show any current transport between the subunits in the Ni(II) meso-substituted dibenzotetraaza[14]annulene complex. Both the Ni(II) 1,5,9,13-tetraaza[16]annulene and the Zn(II) 1,5,9,13-tetraaza[16]annulene are aromatic, but they were not stacked due to their distorted structures. The (2Z,7Z,10Z,14Z)-1,9-dioxa-5,13-diazacyclohexadeca-2,7,10,14-tetraene-5,13-diide complexes with either Zn(II) or Ni(II) were both non-aromatic, as was the Ni(II) (2Z,7Z,10Z,14Z)-1,9-dioxa-5,13-diazacyclohexadeca-2,7,10,14-tetraene-5,13-diide dimer.
  • Laurila, Sara (2023)
    Certain topological phases of matter exhibit low-energy quasiparticles that closely resemble relativistic Weyl fermions due to their linear dispersion. This notion leads to a quasirelativistic description for these non-relativistic condensed matter quasiparticles. In relativistic quantum field theory, Weyl fermions are subject to chiral anomalies when coupled to gauge fields or non-trivial background geometries. Condensed matter Weyl quasiparticles similarly experience anomalies from their background fields, leading to anomalous transport phenomena. We review the field theory of relativistic fermions in curved spacetimes with torsion, and the macroscopic BCS theory of superconductors and superfluids. Using the example of p+ip-paired superfluids and superconductors, we show how their gapless excitations are quasirelativistic Weyl fermions in an emergent spacetime determined by their background fields. With a simple Landau level argument, we then argue that the presence of torsion in this emergent spacetime leads to a chiral anomaly for the Weyl quasiparticles. In the context of relativistic theory, the torsional contribution to the chiral anomaly is controversial, not least because it depends on a non-universal UV cut-off. The Landau level calculation presented here is also ambiguous for relativistic Weyl fermions. However, as we will show, the quasirelativistic approximation we use and the properties of the underlying superfluid or superconductor lead to a natural cut-off for the quasiparticle anomaly. We match this emergent torsional anomaly to the hydrodynamic anomaly in the p+ip-superfluid 3He-A.
  • Vuojamo, Joonas (2022)
    Topological defects and solitons are nontrivial topological structures that can manifest as robust, nontrivial configurations of a physical field, and appear in many branches of physics, including condensed matter physics, quantum computing, and particle physics. A fruitful testbed for experimenting with these fascinating structures is provided by dilute Bose–Einstein condensates. Bose–Einstein condensation was first predicted in 1925 and finally achieved in a dilute atomic gas in a breakthrough experiment in 1995. Since then, the study of Bose–Einstein condensates has expanded to a variety of nontrivial topological structures in condensates of various atomic species. Bose–Einstein condensates with internal spin degrees of freedom may accommodate an especially rich variety of topological structures. Spinor condensates realized in optically trapped ultracold alkali atom gases can be conveniently controlled by external fields and afford an accurate mean-field description. In this thesis, we study the creation and evolution of a monopole-antimonopole pair in such a spin-1 Bose–Einstein condensate by numerically solving the Gross–Pitaevskii equation. The creation of Dirac monopole-antimonopole pairs in a spin-1 Bose–Einstein condensate was numerically demonstrated, and a method for their creation proposed, in an earlier study. Our numerical results demonstrate that the proposed creation method can be used to create a pair of isolated monopoles with opposite topological charges in a spin-1 Bose–Einstein condensate. We found that the monopole-antimonopole pair created in the polar phase of the spin-1 condensate is unstable against decay into a pair of Alice rings with oscillating radii. As a result of a rapid polar-to-ferromagnetic transition, these Alice rings were observed to decay by expanding on a short timescale.
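A minimal split-step Fourier sketch for the scalar Gross–Pitaevskii equation in 1D, as a simplified stand-in for the spin-1 solver used in the thesis (units with hbar = m = 1; g is an assumed interaction strength and V a trap potential).

```python
# One Strang-splitting step of i dpsi/dt = [-(1/2) d2/dx2 + V + g|psi|^2] psi.
import numpy as np

def gpe_step(psi, V, g, dx, dt):
    n = psi.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    # Half step of the potential + nonlinear part in real space.
    psi = psi * np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2))
    # Full kinetic step in Fourier space.
    psi = np.fft.ifft(np.exp(-0.5j * dt * k ** 2) * np.fft.fft(psi))
    # Second half step of the potential + nonlinear part.
    return psi * np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2))
```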
  • Sassi, Sebastian (2019)
    When the standard model gauge group SU(3) × SU(2) × U(1) is extended with an extra U(1) symmetry, the resulting Abelian U(1) × U(1) symmetry introduces a new kinetic mixing term into the Lagrangian. Such double U(1) symmetries appear in various extensions of the standard model and have therefore long been of interest in theoretical physics. Recently this kinetic mixing has received attention as a model for dark matter. In this thesis, a systematic review of kinetic mixing and its physical implications is given, some of the dark matter candidates relying on kinetic mixing are considered, and experimental bounds for kinetic mixing dark matter are discussed. In particular, the process of diagonalizing the kinetic and mass terms of the Lagrangian with a suitable basis choice is discussed. A rotational ambiguity arises in the basis choice when both U(1) fields are massless, and it is shown how this can be addressed. Big bang nucleosynthesis (BBN) bounds for a model with a fermion in the dark sector are also given based on the most recent value of the effective number of neutrino species, and it is found that a significant portion of the FIMP regime is excluded by this constraint.
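Schematically, the kinetic mixing discussed above enters the Abelian sector of the Lagrangian as

\[
\mathcal{L} \supset -\tfrac{1}{4} F_{\mu\nu}F^{\mu\nu} - \tfrac{1}{4} F'_{\mu\nu}F'^{\mu\nu} - \tfrac{\epsilon}{2} F_{\mu\nu}F'^{\mu\nu},
\]

where \(F\) and \(F'\) are the field strengths of the two U(1) gauge fields and \(\epsilon\) is the kinetic mixing parameter; diagonalizing away the cross term is the basis-choice procedure the abstract refers to.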
  • Sirkiä, Topi (2023)
    The QCD axion arises as a necessary consequence of the popular Peccei-Quinn solution to the strong CP problem in particle physics. The axion turns out to possess, very naturally, all the usual qualities of a good dark matter (DM) candidate. Having the potential to solve two major problems in particle cosmology in one fell swoop makes the axion a very attractive prospect. In recent years, the weakening of the traditional WIMP dark matter paradigm, and axion search experiments just beginning to reach the sensitivities required to look for the QCD axion, have further increased interest in axion physics. In this thesis, the basics of axion physics are reviewed, and an in-depth exposition of common direct detection experiments and astrophysical and laboratory limits is given. Particular emphasis is placed on direct detection using the axion-photon coupling, as it is the only coupling for which experimental sensitivity is sufficient to probe the QCD axion. The benchmark experiments of light-shining-through-wall (LSTW), helioscopes and cavity haloscopes are given a thorough theoretical treatment. Other couplings and related experiments are relevant when looking for axion-like particles (ALPs), which are postulated by various extensions of the Standard Model but which do not solve the strong CP problem. A general overview of the prevalent ALP searches is given. Most of the described experimental setups, with some exceptions, are actually searches for very general weakly interacting particles, WISPs, with a certain coupling. The searches are thus well motivated regardless of the future standing of the QCD axion. A chapter is dedicated to axion dark matter and its creation mechanisms, in particular the misalignment mechanism. Two scenarios are mapped out, depending on whether the Peccei-Quinn symmetry breaks spontaneously before or after inflation. Both cases have experimental implications, which are compared. These considerations motivate an axion dark matter window which should be prioritized by experiments. A significant part of this thesis is dedicated to mapping out the experimental landscape of axions today. The up-to-date astrophysical and laboratory limits on the most prominent axion couplings, along with projections of some near-future experiments, are compiled into a set of exclusion plots.
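For reference, the axion-photon coupling exploited by these searches is the standard interaction term

\[
\mathcal{L}_{a\gamma\gamma} = -\frac{g_{a\gamma\gamma}}{4}\, a\, F_{\mu\nu}\tilde{F}^{\mu\nu} = g_{a\gamma\gamma}\, a\, \vec{E}\cdot\vec{B},
\]

which underlies light-shining-through-wall setups, helioscopes and cavity haloscopes alike.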
  • Vihko, Sami (2022)
    We review techniques of perturbative thermal quantum chromodynamics (QCD) in the imaginary-time formalism (ITF). The infrared (IR) problems arising from the perturbative treatment of the equilibrium thermodynamics of QCD, and their phenomenological causes, are investigated in detail. We also discuss the construction of the two effective field theory (EFT) frameworks most often used in modern high precision calculations to overcome these problems. The EFTs are the dimensionally reduced theories EQCD and MQCD, and hard thermal loop (HTL) effective theory. EQCD is three-dimensional Euclidean Yang-Mills theory coupled to an adjoint scalar field, and MQCD is three-dimensional Euclidean pure Yang-Mills theory. The effective parameters in these theories are determined through matching calculations. HTL is based on the resummation of hard thermal loops and uses effective propagators and vertex functions. We also discuss the perturbative determination of the pressure of QCD. Throughout, the thesis details the calculations and methodology.
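Schematically, EQCD takes the standard dimensionally reduced form quoted in the general literature (conventions vary; the effective parameters are fixed by matching):

\[
\mathcal{L}_{\mathrm{EQCD}} = \frac{1}{2}\operatorname{Tr} F_{ij}F_{ij} + \operatorname{Tr}\,[D_i, A_0][D_i, A_0] + m_E^2 \operatorname{Tr} A_0^2 + \lambda_E \big(\operatorname{Tr} A_0^2\big)^2 + \dots,
\]

with the Debye mass \(m_E \sim gT\) and the three-dimensional gauge coupling \(g_E^2 \sim g^2 T\).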
  • Jussila, Joonas (2019)
    In this thesis, the sputtering of tungsten surfaces under ion irradiation is studied using molecular dynamics simulations. The focus of this work is on the effect of surface orientation and incoming angle on tungsten sputtering yields. We present a simulation approach for computing the sputtering yields of completely random surface orientations. This allows obtaining total sputtering yields averaged over a large number of arbitrary surface orientations, which are representative of the sputtering yield of a polycrystalline sample with random grain orientations in a statistically meaningful way. In addition, a completely different method was utilised to simulate the sputtering yields of tungsten fuzz surfaces with various fuzz structure heights. We observe that the total sputtering yields of the investigated surfaces clearly depend on the surface orientation, and the sputtering yields of average random surfaces differ from the results of any of the low index surfaces or their averages. The low index surface and random surface sputtering yields also depend on the incoming angle of the projectile ions. In addition, we calculate the outgoing angular distribution of sputtered tungsten atoms in every bombardment case, which is likewise sensitive to the surface orientation. Finally, the effect of fuzz height on the sputtering yield of tungsten fuzz surfaces is discussed. We see that tungsten fuzz significantly reduces the sputtering yield compared to a pristine tungsten surface, and the effect is already seen when the fuzz pillar height is only a few atomic layers.
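One way to sample uniformly random surface orientations, shown here as a sketch rather than the thesis' actual workflow: draw uniform random rotations and apply them to the crystal axes before cutting the surface.

```python
# Uniform random rotations from SO(3) via scipy.
from scipy.spatial.transform import Rotation

def random_orientation_matrices(n, seed=0):
    """n rotation matrices drawn uniformly from SO(3)."""
    return Rotation.random(n, random_state=seed).as_matrix()

# Averaging sputtering yields over many such orientations approximates
# a polycrystalline sample with random grain orientations.
```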
  • Åström, Hugo (2022)
    I discuss recent work on electronic structure calculations on quantum computers. I introduce quantum computing and electronic structure theory, and then discuss different mappings from electrons and excitation operators to qubits and unitary operators, mainly the Jordan–Wigner and Bravyi–Kitaev mappings. I discuss adiabatic quantum computing in connection with state preparation on quantum computers. I introduce the most important algorithms in the field, namely quantum phase estimation (QPE) and the variational quantum eigensolver (VQE), and mention recent modifications and improvements to these algorithms. I then take a detour to discuss noise and quantum operations, a model for understanding how quantum computations fail because of noise from the environment. Because of this noise, quantum simulators have emerged as a tool for understanding quantum computers, and I have used such simulators to do electronic structure calculations on small atoms. The algorithm I have used, QPE, yields the exact result within the employed basis. As a basis I use numerical orbitals, which are very robust due to their flexibility.
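For concreteness, the Jordan–Wigner mapping sends fermionic ladder operators to qubit operators as (up to sign conventions)

\[
a_j \;\mapsto\; \Big(\prod_{k<j} Z_k\Big)\,\frac{X_j + iY_j}{2}, \qquad
a_j^{\dagger} \;\mapsto\; \Big(\prod_{k<j} Z_k\Big)\,\frac{X_j - iY_j}{2},
\]

where the string of Z operators preserves the fermionic anticommutation relations across the qubit register.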
  • Mäki-Iso, Emma (2021)
    The magnitude of the market risk of investments is often examined using risk measures. A risk measure is a mapping from the set of random variables describing possible losses to the real numbers. Risk measures make it easy to compare the riskiness of different investments, and banking supervisors use them in supervising the capital adequacy of banks. The risk measure longest in general use is Value-at-Risk (VaR). VaR gives the largest loss incurred at a chosen confidence level α, i.e. it is the α-quantile of the loss distribution. In the newest Basel guidance (Minimum capital requirements for market risk), a risk measure called expected shortfall replaces VaR in the calculation of the capital requirement. Expected shortfall is the expected value of the loss given that the loss exceeds the level given by VaR. The risk measure is being changed because the theoretical properties of VaR are not as good as those of expected shortfall: VaR is not subadditive, meaning that the combined risk of positions can in some cases be larger than the sum of the risks of the individual positions, so the risk of an undiversified portfolio can appear smaller than that of a diversified one. Expected shortfall is not entirely unproblematic either, because it is not consistently scoring: there is no scoring function with which estimated and realized values could be compared consistently. Moreover, since the size of expected shortfall depends on all the losses in the tail, it is sensitive to errors in the tail losses. This is not a good property, because estimating the tails of loss distributions involves considerable uncertainty. Because risk estimation involves uncertainty, regulation obliges banks to backtest the risk estimates used in calculating the regulatory capital requirement. Backtesting is the process of comparing estimated risk figures with realized losses. Backtesting of VaR estimates is based on the number of days in the test period on which the loss exceeds the level given by the VaR estimate. For expected shortfall, backtesting methods are not yet as well established as for VaR. This thesis presents three ways to backtest expected shortfall estimates, introduced by Kratz and colleagues, by Moldenhauer and Pitera, and by Costanzino and Curran. The methods examine simultaneous exceedances of several VaR levels, the number of observations for which the cumulative sum of the secured position (the difference between the loss and the risk estimate) is positive, and the average size of the VaR exceedances. The computational part of the thesis investigated whether VaR and expected shortfall backtests give similar results and whether the length of the observation period used for risk estimation affects how the estimates perform in the backtests. The calculations showed that expected shortfall and VaR backtests gave similar results. Estimates computed from market data with estimation windows of different sizes received test statistics of different sizes, and accepted the wrong model or rejected the correct model with different probabilities. With purely simulated data, there were no differences between the results of estimates computed with different estimation window sizes. 
From this it can be concluded that the differences in test results between estimates computed over observation periods of different lengths are due not only to the number of observations but also to their quality.
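A minimal sketch of how the two risk measures relate, assuming losses are recorded as positive numbers; this is an illustration, not the estimation code used in the thesis.

```python
# Empirical VaR and expected shortfall at confidence level alpha.
import numpy as np

def var_es(losses, alpha=0.975):
    losses = np.sort(np.asarray(losses))
    var = np.quantile(losses, alpha)        # alpha-quantile of the loss sample
    tail = losses[losses > var]             # losses beyond the VaR level
    es = tail.mean() if tail.size else var  # mean loss given a VaR exceedance
    return var, es
```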
  • Pirnes, Sakari (2023)
    The Smoluchowski coagulation equation is considered to be one of the most fundamental equations of the classical description of matter, alongside the Boltzmann, Navier-Stokes and Euler equations. It has applications from physical chemistry to astronomy. In this thesis, a new existence result for measure valued solutions of the coagulation equation is proven. The proven existence result is stronger and more general than a previously claimed result, and it holds for a generic class of coagulation kernels, including various kernels used in applications. The coagulation equation models binary coagulation of objects characterized by a strictly positive real number called size, which often represents mass or volume in applications. In binary coagulation, two objects can merge together at a rate characterized by the so-called coagulation kernel. The time evolution of the size distribution is given by the coagulation equation. Traditionally the coagulation equation has two forms, discrete and continuous, referring to whether the object sizes take discrete or continuous values. A similar existence result to the one proven in this thesis has been obtained for the continuous coagulation equation, while the discrete coagulation equation is often favored in applications. Being able to study both discrete and continuous systems, and their mixtures, at the same time has motivated the study of measure valued solutions of the coagulation equation. After motivating the existence result proven in this thesis, its proof is organized into four Steps described at the end of the introduction. The needed mathematical tools and their connection to the four Steps are presented in chapter 2. The precise mathematical statement of the existence result is given in chapter 3 together with Step 1, where the coagulation equation is regularized, using a parameter ε ∈ (0, 1), into a more manageable regularized coagulation equation. Step 2 is carried out in chapter 4 and consists of proving existence and uniqueness of a solution f_ε for each regularized coagulation equation. Steps 3 and 4 are carried out in chapter 5. In Step 3, it is proven that the regularized solutions {f_ε} have a converging subsequence in the topology of uniform convergence on compact sets. Step 4 finishes the existence proof by verifying that the subsequence's limit satisfies the original coagulation equation. Possible improvements and future work are outlined in chapter 6.
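For reference, the continuous-size form of the coagulation equation discussed above reads

\[
\partial_t f(t,x) = \frac{1}{2}\int_0^x K(x-y,\,y)\, f(t,x-y)\, f(t,y)\, \mathrm{d}y \;-\; \int_0^\infty K(x,y)\, f(t,x)\, f(t,y)\, \mathrm{d}y,
\]

where \(K\) is the coagulation kernel: the first term creates size-\(x\) objects from smaller pieces, the second removes them as they merge into larger ones. The measure valued formulation makes sense of this for size distributions with both discrete and continuous parts.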
  • Vuoksenmaa, Aleksis Ilari (2020)
    Coagulation equations are evolution equations that model the time-evolution of the size-distribution of particles in systems where colliding particles stick together, or coalesce, to form one larger particle. These equations arise in many areas of science, most prominently in aerosol physics and the study of polymers. In the former case, the colliding particles are small aerosol particles that form ever larger aerosol particles, and in the latter case, the particles are polymers of various sizes. As the system evolves, the density of particles of a specified size changes. The rate of change is specified by two competing factors. On the one hand, there is a positive contribution coming from smaller particles coalescing to form particles of this specific size. On the other hand, particles of this size can coalesce with other particles to form larger particles, which contributes negatively to the density of particles of this size. Furthermore, if there is no addition of new particles into the system, then the total mass of the particles should remain constant. From these considerations, it follows that the time-evolution of the coagulation equation is specified, for every particle size, by a difference of two terms which together preserve the total mass of the system. The physical properties of the system affect the time evolution via a coagulation kernel, which determines the rate at which particles of different sizes coalesce. A variant of the coagulation equation is obtained by adding an injection term to the evolution equation to account for new particles injected into the system. This results in a new evolution equation, a coagulation equation with injection, where the total mass of the system is no longer preserved, as new particles are added into the system at each point in time. Coagulation equations with injection may have non-trivial solutions that are independent of time. The existence of non-trivial stationary solutions has ramifications in aerosol physics, since these may correspond to observations that the particle size distribution in the air stays approximately constant. In this thesis, it will be demonstrated, following Ferreira et al. (2019), that for any good enough injection term and for suitably picked, compactly supported coagulation kernels, there exists a stationary solution to a regularized version of the coagulation equation. This theorem, which relies heavily on functional analytic tools, is a central step in the proof that certain asymptotically well-behaved kernels have stationary solutions for any prescribed compactly supported injection term.
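Schematically, a stationary solution with injection satisfies

\[
0 = \frac{1}{2}\int_0^x K(x-y,\,y)\, f(x-y)\, f(y)\, \mathrm{d}y \;-\; \int_0^\infty K(x,y)\, f(x)\, f(y)\, \mathrm{d}y \;+\; \eta(x),
\]

where \(\eta\) is the prescribed injection term; the mass added by \(\eta\) is transported toward ever larger sizes instead of being conserved, which is why a time-independent size distribution can persist.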