Browsing by master's degree program "Master's Programme in Theoretical and Computational Methods"

  • Rasola, Miika (2020)
    Resonant inelastic X-ray scattering (RIXS) is one of the most powerful synchrotron-based methods for obtaining information on the electronic structure of materials. Novel ultra-brilliant X-ray sources, X-ray free electron lasers (XFELs), offer intriguing new possibilities beyond the traditional synchrotron-based techniques, facilitating the transition of X-ray spectroscopic methods to the nonlinear intensity regime. Such nonlinear phenomena are well known in the optical energy range, less so at X-ray energies. The transition of RIXS to the nonlinear regime could have a significant impact on X-ray based materials research by enabling more accurate measurements of previously observed transitions, allowing the detection of weakly coupled transitions in dilute samples, and possibly uncovering completely unforeseen information or serving as a platform for novel, more intricate methods in the future. Nonlinear or stimulated RIXS (SRIXS) at an XFEL has already been demonstrated in the simplest possible proof-of-concept case. In this work a comprehensive introduction to SRIXS is presented from a theoretical point of view, starting from the very beginning, thus making it suitable for anyone with a basic understanding of quantum mechanics and spectroscopy. To start off, the principles of many-body quantum mechanics are reviewed and the configuration interaction method for representing molecular states is introduced. No previous familiarity with X-ray-matter interaction or RIXS is required, as the molecular and interaction Hamiltonians are carefully derived, and based on these a thorough analysis of the traditional RIXS theory is presented. To stay in touch with the real world, the basic experimental facts are recapped before moving on to SRIXS. First, an intuitive picture of the nonlinear process is presented, shedding some light on the term "stimulated" while introducing the basic terminology and some X-ray pulse schemes along with futuristic theoretical examples of SRIXS experiments. After this, a careful derivation of the Maxwell-Liouville-von Neumann theory up to quadrupole order is presented for the first time. Finally, the work concludes with a short analysis of the experimental status quo at XFELs and some speculation on possible transition metal samples where SRIXS in its current state could be applied to observe quadrupole transitions, advancing the field remarkably.
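    As a point of reference for the traditional RIXS theory discussed above, the double-differential cross section is conventionally written in the Kramers-Heisenberg form; the notation below is the standard one and is assumed here for illustration, not quoted from the thesis.

```latex
% Conventional Kramers-Heisenberg form of the RIXS cross section (sketch):
% |i>, |n>, |f> are initial, intermediate and final states, D the transition
% operator, \omega (\omega') the incident (emitted) photon frequency and
% \Gamma_n the intermediate-state broadening; notation assumed, not the thesis' own.
\frac{d^2\sigma}{d\omega'\,d\Omega} \propto
\sum_f \left| \sum_n
\frac{\langle f | D^\dagger | n \rangle \langle n | D | i \rangle}
     {E_i - E_n + \hbar\omega + i\Gamma_n} \right|^2
\delta\!\left(E_i - E_f + \hbar\omega - \hbar\omega'\right)
```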
  • Pirttikoski, Antti (2021)
    The LHC is the highest-energy particle collider ever built, and it is used to study elementary particles by colliding protons together. One intriguing subject of study at the LHC is the stability of the electroweak vacuum in our universe. The current prediction suggests that the vacuum is in a metastable state. The stability of the vacuum depends on the mass of the top quark, and it is possible that a more precise measurement of the mass could shift the prediction to the border of the metastable and stable states. In order to measure the mass of the top quark more precisely, we need to measure the bottom (b) quarks decaying from it at high precision, as the top quark decays predominantly into a W boson and a b quark. Due to the phenomenon called hadronisation, we cannot measure the quarks directly, but rather as sprays of collimated particles called jets. Jets originating from b quarks (b jets) can be identified by b-tagging. Precise measurement and calibration of the b jet energy is crucial for the top quark mass measurement. This thesis studies b jets and their energy calibration at CMS, which is one of the general-purpose detectors along the LHC. In particular, the b jet energy scale (bJES) and the various phenomena affecting it are investigated. For example, a large fraction of b jets contain neutrinos, which cannot be measured directly; this increases the uncertainties related to the energy measurement. There are also questions about how precisely the formation and evolution of b jets can be modelled by Monte Carlo event generators such as Pythia8, which was used in this thesis. The aim of this thesis is to evaluate how large an effect the various phenomena that presumably weaken the precision of b jet measurements have on the bJES. The studied phenomena are the semileptonic branching ratios of b hadrons, the branching ratios of b hadron to c hadron decays, the b hadron production fractions and the parameterization of the b quark fragmentation function. The combined effect of the four rescaling features mentioned above suggests that the bJES is known at the 0.2% level. A small shift of -0.1% in the Missing transverse energy Projection Fraction (MPF) response scale is detected at low pT values, which vanishes as pT increases. This is a remarkable improvement on the 0.4-0.5% JES accuracy achieved at CMS during Run 1 of the LHC. However, there are still many ways to improve the performance presented here, and further studies of the rescaling methods are definitely needed before the results can be used in bJES corrections for a precision measurement of the top quark mass.
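    For orientation, the MPF response mentioned above is commonly defined as follows; this is the standard jet-energy-scale notation, assumed here rather than quoted from the thesis.

```latex
% MPF (Missing transverse energy Projection Fraction) response as commonly
% defined in jet energy scale studies: \vec{p}_T^{\,miss} is the missing
% transverse momentum and \vec{p}_T^{\,ref} the reference-object transverse
% momentum. Notation assumed for illustration.
R_{\mathrm{MPF}} = 1 +
\frac{\vec{p}_{T}^{\,\mathrm{miss}} \cdot \vec{p}_{T}^{\,\mathrm{ref}}}
     {\left(p_{T}^{\,\mathrm{ref}}\right)^{2}}
```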
  • Jääskeläinen, Matias (2020)
    This thesis explores descriptors for atmospheric molecular clusters. Descriptors are needed for applying machine learning methods to molecular systems. A collection of descriptors is readily available in the DScribe library, developed at Aalto University for custom machine learning applications; the question of which descriptors to use is left to the user. This study takes the first steps in integrating machine learning into an existing configurational sampling procedure that aims to find the optimal structure for any given molecular cluster of interest. The structure selection step forms a bottleneck in the configurational sampling procedure. A new structure selection method presented in this study uses k-means clustering to find structures that are similar to each other. The clustering results can be used to discard redundant structures more effectively than before, which leaves fewer structures for the more expensive calculations. Altogether, this speeds up the configurational sampling procedure. To aid the selection of a suitable descriptor for this application, four descriptors available in DScribe are compared. A procedure for structure selection, in which atmospheric clusters are represented with descriptors and grouped with k-means, was implemented. The performance of the descriptors was compared with a custom score suitable for this application, and MBTR was found to outperform the other descriptors. This structure selection method will be used in the existing configurational sampling procedure for atmospheric molecular clusters, but it is not restricted to that application.
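    A minimal sketch of the kind of descriptor-based k-means selection described above, assuming the descriptor vectors (for example MBTR vectors from DScribe) have already been computed into an array X; the function and variable names are illustrative, not the thesis code.

```python
# Illustrative sketch of k-means-based structure selection, assuming each
# cluster geometry has already been converted into a descriptor vector
# (rows of X), e.g. with the MBTR descriptor from the DScribe library.
# Names are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

def select_representatives(X: np.ndarray, n_groups: int) -> np.ndarray:
    """Group similar structures with k-means and keep one per group.

    X        : (n_structures, n_features) descriptor matrix
    n_groups : number of groups, i.e. number of structures to keep
    Returns the indices of the selected representative structures.
    """
    km = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit(X)
    selected = []
    for label in range(n_groups):
        members = np.where(km.labels_ == label)[0]
        # Keep the member closest to the group centroid; the rest are
        # treated as redundant and skipped in the expensive calculations.
        dists = np.linalg.norm(X[members] - km.cluster_centers_[label], axis=1)
        selected.append(members[np.argmin(dists)])
    return np.array(selected)
```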
  • Pruikkonen, Sanni (2021)
    Stacking of antiaromatic molecules leads to enhanced stability and higher conductivity due to reversed antiaromaticity. It has been shown that cyclophanes consisting of antiaromatic Ni(II) norcorrole subunits have a vertical current-density flux between the two metal ions. The Ni(II) meso-substituted dibenzotetraaza[14]annulene complex fulfills the Hückel rule for antiaromaticity. Upon increasing the temperature above 13 K, the effective magnetic moment of solid-state Ni(II) meso-substituted dibenzotetraaza[14]annulene changes from diamagnetic to paramagnetic. A suggested explanation for this is a weak interaction between the Ni atoms. In this study, the possible existence of a vertical current-density flux between the two metal ions in the Ni(II) meso-substituted dibenzotetraaza[14]annulene is investigated. In addition, the effect of the Ni and N atoms in Ni(II) 1,5,9,13-tetraaza[16]annulene was studied by replacing Ni with Zn and N with O. Electronic motion in molecules under the influence of a magnetic field is investigated computationally, since at present there is no routine experimental method for doing so. TURBOMOLE, the Gauge-Including Magnetically Induced Currents (GIMIC) method and ParaView were employed in this study for structure optimization of the molecules, calculation of the current-density flux and current strengths in the molecules, and visualisation of the current-density pathways, respectively. The results of this study do not show any current transport between the subunits in the Ni(II) meso-substituted dibenzotetraaza[14]annulene complex. Both the Ni(II) 1,5,9,13-tetraaza[16]annulene and the Zn(II) 1,5,9,13-tetraaza[16]annulene are aromatic, but they were not stacked due to their distorted structures. The (2Z,7Z,10Z,14Z)-1,9-dioxa-5,13-diazacyclohexadeca-2,7,10,14-tetraene-5,13-diide complexes with either Zn(II) or Ni(II) were both non-aromatic, as was the Ni(II) (2Z,7Z,10Z,14Z)-1,9-dioxa-5,13-diazacyclohexadeca-2,7,10,14-tetraene-5,13-diide dimer.
  • Sassi, Sebastian (2019)
    When the standard model gauge group SU(3) × SU(2) × U(1) is extended with an extra U(1) symmetry, the resulting Abelian U(1) × U(1) symmetry introduces a new kinetic mixing term into the Lagrangian. Such double U(1) symmetries appear in various extensions of the standard model and have therefore long been of interest in theoretical physics. Recently this kinetic mixing has received attention as a model for dark matter. In this thesis, a systematic review of kinetic mixing and its physical implications is given, some of the dark matter candidates relying on kinetic mixing are considered, and experimental bounds for kinetic mixing dark matter are discussed. In particular, the process of diagonalizing the kinetic and mass terms of the Lagrangian with a suitable basis choice is discussed. A rotational ambiguity arises in the basis choice when both U(1) fields are massless, and it is shown how this can be addressed. BBN bounds for a model with a fermion in the dark sector are also given based on the most recent value of the effective number of neutrino species, and it is found that a significant portion of the FIMP regime is excluded by this constraint.
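    For context, the kinetic mixing discussed above enters the Lagrangian through a term of the standard form below; the notation (field strengths F and X of the two U(1) gauge fields, mixing parameter ε) is conventional and assumed here for illustration.

```latex
% Abelian kinetic mixing in conventional notation (illustrative, not quoted
% from the thesis): F_{\mu\nu} and X_{\mu\nu} are the field strengths of the
% two U(1) gauge fields and \epsilon is the kinetic mixing parameter.
\mathcal{L} \supset
-\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu}
-\tfrac{1}{4} X_{\mu\nu} X^{\mu\nu}
-\tfrac{\epsilon}{2} F_{\mu\nu} X^{\mu\nu}
```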
  • Jussila, Joonas (2019)
    In this thesis, the sputtering of tungsten surfaces under ion irradiation is studied using molecular dynamics simulations. The focus of this work is on the effect of surface orientation and incoming angle on tungsten sputtering yields. We present an approach for simulating the sputtering yields of completely random surface orientations. This allows obtaining total sputtering yields averaged over a large number of arbitrary surface orientations, which are representative of the sputtering yield of a polycrystalline sample with random grain orientations in a statistically meaningful way. In addition, a completely different method was used to simulate the sputtering yields of tungsten fuzz surfaces with various fuzz structure heights. We observe that the total sputtering yields of the investigated surfaces clearly depend on the surface orientation, and that the sputtering yields of average random surfaces differ from the results of any of the low-index surfaces or their averages. The low-index surface and random-surface sputtering yields also show a dependence on the incoming angle of the projectile ions. In addition, we calculate the outgoing angular distribution of sputtered tungsten atoms in every bombardment case, which likewise proves to be sensitive to the surface orientation. Finally, the effect of fuzz height on the sputtering yield of tungsten fuzz surfaces is discussed. We see that tungsten fuzz significantly reduces the sputtering yield compared to a pristine tungsten surface, and the effect is already seen when the fuzz pillar height is a few atom layers.
  • Mäki-Iso, Emma (2021)
    The magnitude of the market risk of investments is often examined using risk measures. A risk measure is a mapping from the set of random variables describing possible losses to the real numbers. Risk measures make it easy to compare the riskiness of different investments, and banking supervisors use them in monitoring the capital adequacy of banks. The risk measure longest in general use has been Value-at-Risk (VaR). VaR gives the largest loss incurred at a chosen confidence level α, i.e. it is the α-quantile of the loss distribution. In the latest Basel guidance (Minimum capital requirements for market risk), a risk measure called expected shortfall replaces VaR in the calculation of the capital requirement. Expected shortfall gives the expected value of the loss when the loss exceeds the level given by VaR. The risk measure is being changed because the theoretical properties of VaR are not as good as those of expected shortfall: VaR is not subadditive, meaning that in some cases the combined risk of several positions can be larger than the sum of the risks of the individual positions, so that the risk of an undiversified portfolio can appear smaller than that of a diversified one. Expected shortfall is not entirely without problems either, because it is not consistently scorable: there is no scoring function with which estimated and realized values could be compared consistently. In addition, since the value of expected shortfall depends on all losses in the tail, it is sensitive to errors in the tail losses. This is not a desirable property, because estimating the tails of loss distributions involves considerable uncertainty. Because risk estimation involves uncertainty, regulation obliges banks to backtest the risk estimates used in calculating the regulatory capital requirement. Backtesting is the process of comparing estimated risk figures with realized losses. Backtesting of VaR estimates is based on the number of days in the test period on which the loss exceeds the level given by the VaR estimate. For expected shortfall, backtesting methods are not yet as well established as those for VaR. This thesis presents three different ways to backtest expected shortfall estimates, introduced by Kratz and colleagues, by Moldenhauer and Pitera, and by Costanzino and Curran. The methods examine simultaneous exceedances of several VaR levels, the number of observations in which the secured position, i.e. the difference between the loss and the risk estimate, cumulatively sums to a positive number, and the average size of the VaR exceedances. In the computational part of the thesis it was studied whether VaR and expected shortfall backtests give similar results and whether the length of the observation period used in risk estimation affects how the estimates perform in the backtests. The calculations showed that the expected shortfall and VaR backtests gave similar results. Estimates computed from market data with estimation windows of different lengths obtained test statistics of different sizes in the backtests and accepted a wrong model or rejected a correct model with different probabilities. When purely simulated data were used, there were no differences in the results between estimates computed with estimation windows of different lengths. It can therefore be concluded that the differences in test results between estimates computed over observation periods of different lengths are not due only to the number of observations but also to their quality.
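    A minimal sketch of the quantities involved, assuming an array of realized losses and a rolling estimation window; the exceedance count shown is the standard VaR backtesting idea, and the function and variable names are illustrative rather than the thesis implementation.

```python
# Illustrative sketch of historical-simulation VaR and expected shortfall,
# plus a simple exceedance count of the kind used in VaR backtesting.
# Names and window handling are hypothetical.
import numpy as np

def var_es(losses: np.ndarray, alpha: float = 0.975):
    """Historical-simulation VaR and expected shortfall at level alpha."""
    var = np.quantile(losses, alpha)
    tail = losses[losses > var]
    es = tail.mean() if tail.size else var
    return var, es

def count_var_exceedances(losses: np.ndarray, window: int = 250,
                          alpha: float = 0.99) -> int:
    """Count the days on which the realized loss exceeds the rolling VaR estimate."""
    exceedances = 0
    for t in range(window, len(losses)):
        var_t, _ = var_es(losses[t - window:t], alpha)
        if losses[t] > var_t:
            exceedances += 1
    return exceedances
```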
  • Vuoksenmaa, Aleksis Ilari (2020)
    Coagulation equations are evolution equations that model the time evolution of the size distribution of particles in systems where colliding particles stick together, or coalesce, to form one larger particle. These equations arise in many areas of science, most prominently in aerosol physics and the study of polymers. In the former case, the colliding particles are small aerosol particles that form ever larger aerosol particles, and in the latter case, the particles are polymers of various sizes. As the system evolves, the density of particles of a specified size changes. The rate of change is determined by two competing factors. On one hand, there is a positive contribution from smaller particles coalescing to form particles of this specific size. On the other hand, particles of this size can coalesce with other particles to form larger particles, which contributes negatively to the density of particles of this size. Furthermore, if no new particles are added to the system, the total mass of the particles should remain constant. From these considerations it follows that the time evolution of the coagulation equation is specified, for every particle size, by a difference of two terms which together preserve the total mass of the system. The physical properties of the system affect the time evolution via a coagulation kernel, which determines the rate at which particles of different sizes coalesce. A variation of coagulation equations is obtained when an injection term is added to the evolution equation to account for new particles injected into the system. This results in a new evolution equation, a coagulation equation with injection, where the total mass of the system is no longer preserved, as new particles are added to the system at each point in time. Coagulation equations with injection may have non-trivial solutions that are independent of time. The existence of non-trivial stationary solutions has ramifications in aerosol physics, since these may correspond to observations that the particle size distribution in the air stays approximately constant. In this thesis, it is demonstrated, following Ferreira et al. (2019), that for any sufficiently well-behaved injection term and for suitably chosen, compactly supported coagulation kernels, there exists a stationary solution to a regularized version of the coagulation equation. This theorem, which relies heavily on functional analytic tools, is a central step in the proof that certain asymptotically well-behaved kernels admit stationary solutions for any prescribed compactly supported injection term.
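    In the continuous-size notation that is standard for such models (assumed here, not quoted from the thesis), the coagulation equation with injection described above can be written as follows.

```latex
% Smoluchowski-type coagulation equation with an injection term (standard
% continuous form, notation assumed): f(x,t) is the density of particles of
% size x at time t, K is the coagulation kernel and \eta(x) the injection rate.
\partial_t f(x,t) =
\tfrac{1}{2} \int_0^x K(x-y,\,y)\, f(x-y,t)\, f(y,t)\, \mathrm{d}y
- f(x,t) \int_0^\infty K(x,y)\, f(y,t)\, \mathrm{d}y
+ \eta(x)
```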
  • Pelttari, Hannu (2020)
    Federated learning is a method for training a machine learning model on multiple remote datasets without the need to gather the data from the remote sites to a central location. In healthcare, gathering the data from different hospitals into a central location can be a difficult and time-consuming task, due to privacy concerns and regulations regarding the use of sensitive data, making federated learning an attractive alternative to more traditional methods. This thesis adapted an existing federated gradient boosting model, developed a new federated random forest model, and applied them to mortality prediction in intensive care units. The results were then compared to the centralized counterparts of the models. The results showed that while the federated models did not perform as well as the centralized models on a similarly sized dataset, the federated random forest model can achieve superior performance when trained on multiple hospitals' data compared to centralized models trained on a single hospital's data. In scenarios where the centralized models had data from multiple hospitals, the federated models could not perform as well as the centralized models. It was also found that the performance of the centralized models could not be improved with further federated training. In addition to practical advantages such as the possibility of parallel or asynchronous training without modifications to the algorithm, the federated random forest performed better than the federated gradient boosting in all scenarios. The performance of the federated random forest was also found to be more consistent across the different scenarios than that of the federated gradient boosting, which was highly dependent on factors such as the order in which the hospitals were traversed.
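    A minimal sketch of the general idea behind a federated random forest of the kind described above, assuming each hospital fits trees on its own data and only the fitted models (not raw data) are pooled; this illustrates the general approach under those assumptions and is not the thesis implementation.

```python
# Illustrative sketch of a federated random forest: each site fits a forest on
# its local data, and only the fitted forests are combined by averaging their
# predicted probabilities. Names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_local_forest(X, y, n_trees=100, seed=0):
    """Train a random forest on one hospital's local data."""
    return RandomForestClassifier(n_estimators=n_trees, random_state=seed).fit(X, y)

def federated_predict_proba(local_forests, X):
    """Average the predicted mortality probabilities of all locally trained forests."""
    probs = [forest.predict_proba(X)[:, 1] for forest in local_forests]
    return np.mean(probs, axis=0)

# Usage sketch (hospital_datasets is a hypothetical list of (X, y) pairs):
# forests = [train_local_forest(Xh, yh, seed=i)
#            for i, (Xh, yh) in enumerate(hospital_datasets)]
# risk = federated_predict_proba(forests, X_test)
```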
  • Niedermeier, Marcel (2021)
    Matrix product states provide an efficient parametrisation of low-entanglement many-body quantum states. In this thesis, the underlying theory is developed from scratch, requiring only basic notions of quantum mechanics and quantum information theory. A full introduction to matrix product state algebra and matrix product operators is given, culminating in the derivation of the density matrix renormalisation group algorithm. The latter provides a simple variational scheme to determine the ground state of arbitrary one-dimensional many-body quantum systems with supreme precision. As an application of matrix product state technology, the kernel polynomial method is introduced in detail as a state-of-the-art numerical tool for finding the spectral function or the dynamical correlator of a given quantum system. This in turn gives access to the elementary excitations of the system, so that the locations of the low-energy eigenstates can be studied directly in real space. To illustrate these theoretical tools concretely, the ground state energy, the entanglement entropy and the elementary excitations of a simple interface model of a Heisenberg ferromagnet and a Heisenberg antiferromagnet are studied. By changing the location of the model in parameter space, the dependence of the above-mentioned quantities on the transverse field and the coupling strength is investigated. Most notably, we find that the entanglement entropy characteristic of the antiferromagnetic ground state stretches across the interface into the ferromagnetic half-chain. The dependence of the physics on the value of the coupling strength is, overall, small, with the exception of the appearance of a boundary mode whose eigenenergy grows with the coupling. A comparison with a localised edge field shows, however, that the boundary mode is a true interaction effect of the two half-chains. Various algorithmic and physics extensions of the present project are discussed, such that the code written as part of this thesis could be turned into a state-of-the-art MPS library with manageable effort. In particular, an application of the kernel polynomial method to calculate finite-temperature correlators is derived in detail.
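    As a compact reminder of the kernel polynomial method mentioned above, the spectral function is expanded in Chebyshev polynomials of the rescaled Hamiltonian; the notation below is the standard one and is assumed here for orientation.

```latex
% Kernel polynomial method in standard notation (assumed, not quoted from the
% thesis): \tilde{H} is the Hamiltonian rescaled to [-1,1], T_n are Chebyshev
% polynomials, g_n are kernel damping factors and \mu_n the Chebyshev moments
% evaluated between the states |\psi>, |\psi'> defining the correlator.
A(\omega) \approx \frac{1}{\pi\sqrt{1-\omega^2}}
\left[ g_0 \mu_0 + 2 \sum_{n=1}^{N-1} g_n \mu_n T_n(\omega) \right],
\qquad
\mu_n = \langle \psi | \, T_n(\tilde{H}) \, | \psi' \rangle
```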
  • Lankinen, Juhana (2020)
    Due to the unique properties of foams, they can be found in many different applications in a wide variety of fields. The study of foams is also useful for the many properties they share with other phenomena, such as impurities in cooling metals, where the impurities coarsen similarly to bubbles in foams. For these and other reasons, foams have been studied extensively for over a hundred years and continue to be an interesting area of study today, due to new insights in both experimental and theoretical work and new applications waiting to be used and realized in different industries. The most impactful early work on the properties of foams was done in the late 1800s by Plateau. His work was extended in the early to mid-1900s by Lifshitz, Slyozov, Wagner and von Neumann, and by many more authors in recent years. The early work was mostly experimental or theoretical in the sense of performing mathematical calculations on paper, while the modern methods of study have kept the experimental part -- with more refined methods of measurement, of course -- but shifted towards implementing the theory as simulations instead of solving problems on paper. In the early 90s, Durian proposed a new method for simulating the mechanics of wet foams, based on repulsive spring-like forces between neighboring bubbles. This model was later extended to allow for the coarsening of the foam, and a slightly modified version of it has been implemented in the code presented in this thesis. As foams consist of a very large number of bubbles, it is important to be able to simulate sufficiently large systems to study the physics of foams realistically. Very large systems have traditionally been too slow to simulate at the individual-bubble level, but thanks to the popularity of computer games and the continuous demand for better graphics, graphics processing units have become very powerful and can nowadays be used for highly parallel general-purpose computing. In this thesis, a modified version of Durian's wet foam model that runs on the GPU is presented. The code has been implemented in modern C++ using Nvidia's CUDA on the GPU. Using this program, a typical two-dimensional foam with 100,000 bubbles is first simulated, and the simulation code is found to replicate the expected behaviour for this kind of foam. After this, a more detailed analysis is made of a novel phenomenon, the separation of liquid and gas phases in low gas fraction foams, which arises only at sufficiently large system sizes. It is found that the phase separation causes the foam to evolve as a foam of higher gas fraction would, until the phases have mixed back together. It is hypothesized that the phase separation is related to an uneven energy distribution in the foam, which itself is related to jamming and to an uneven distribution of bubble sizes in the foam.
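    A simplified illustration of the spring-like pairwise repulsion used in Durian-type wet foam models, written in conventional soft-disk form; the exact prefactors and any drag terms used in the thesis code are not reproduced here, so this is a sketch under assumed notation.

```latex
% Spring-like pairwise repulsion between overlapping bubbles, in simplified
% soft-disk form (notation assumed): bubbles i and j have radii R_i, R_j,
% centre positions \vec{r}_i, \vec{r}_j, and k is an effective spring constant.
\vec{F}_{ij} =
\begin{cases}
k \left( R_i + R_j - |\vec{r}_i - \vec{r}_j| \right) \hat{r}_{ij}, &
|\vec{r}_i - \vec{r}_j| < R_i + R_j \\[4pt]
\vec{0}, & \text{otherwise}
\end{cases}
```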
  • Polus, Aku (2021)
    We begin by discussing the essential concepts of standard cosmology, in which the dark matter is "cold" and collisionless. We consider structure formation in the dark matter component and present problems faced by standard cosmology, as well as some prospects for their solutions. The main problem considered in this work is the tension in the value of the Hubble constant measured with different procedures. We present the theories behind the procedures and conclude the study of the tension by considering the most notable interpretations of its origin. We then set up a proposal for an alternative model describing the dark sector: a hidden copy of visible-sector electromagnetism, which allows radiative cooling in virializing structures. Assuming first an asymmetric particle content, we study which scales of dark matter halos are able to collapse into dense structures. Acquiring a mass function then allows us to conclude how much of the total dark matter component is expected to collapse. If instead the dark matter particle content is taken to be symmetric, the collapsed fraction is assumed to annihilate into dark radiation. With certain modifications to the freely available Boltzmann code CAMB, we construct in the code a representation of the cosmology defined by our model. Lastly, we use the modified cosmology to fit the data defining the Hubble constant and assess the relief of the tension. We find that our model provides a reasonable history for the energy content of the universe and a notable relief of the Hubble tension, although the improvement is only minor compared to some more modest modifications of the cosmology.
  • Seshadri, Sangita (2020)
    Blurring is a common phenomenon in image formation, caused by factors such as motion between the camera and the object, atmospheric turbulence, or the camera failing to have the object in focus, all of which degrade the image formation process. The pixels interact with their neighboring pixels, and the captured image is blurry as a result. This interaction with the neighboring pixels is the 'spread' represented by the point spread function. Image deblurring has many applications, for example in astronomy and medical imaging, where extracting the exact image required might not be possible due to various limiting factors, so that what we get is a deformed image. In such cases, it is necessary to use an apt deblurring algorithm, keeping factors such as performance and time in mind. This thesis analyzes the performance of learning-based and analytical methods for image deblurring. Inverse problems are discussed first, along with why ill-posed inverse problems such as image deblurring cannot be tackled by naive deconvolution. This is followed by the need for regularization and how it is necessary to control the fluctuations resulting from extreme sensitivity to noise. The image reconstruction problem has the form of a convex variational problem, with prior knowledge acting as inequality constraints that create a feasible region for the optimal solution; interior point methods iterate within this feasible region. This thesis uses the iRestNet method, which follows a forward-backward iterative approach, as the machine learning algorithm, and a total variation approach implemented with the FlexBox tool, which uses a primal-dual approach, as the analytical method. The performance is measured using SSIM indices for a range of kernels, and the SSIM map is also analyzed to compare the deblurring efficiency.
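    The analytical branch described above is an instance of the standard total-variation regularized deblurring problem, written here in conventional notation as an illustration rather than quoted from the thesis.

```latex
% Total-variation regularized deblurring in conventional notation (assumed):
% A is the blur operator built from the point spread function, b the observed
% blurry image, u the reconstruction and \lambda > 0 the regularization weight.
\min_{u} \; \tfrac{1}{2} \| A u - b \|_2^2 + \lambda \, \mathrm{TV}(u),
\qquad
\mathrm{TV}(u) = \int |\nabla u| \, \mathrm{d}x
```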
  • Besel, Vitus (2020)
    We investigated the impact of various parameters on the new particle formation rates predicted for the sulfuric acid - ammonia system using cluster distribution dynamics simulations, in our case ACDC (Atmospheric Cluster Dynamics Code). The predicted particle formation rates increase significantly if the rotational symmetry numbers of the monomers (sulfuric acid and ammonia molecules, and bisulfate and ammonium ions) are considered in the simulation. On the other hand, inclusion of the rotational symmetry numbers of the clusters changes the results only slightly, and only in conditions where charged clusters dominate the particle formation rate, because most of the clusters stable enough to participate in new particle formation display no symmetry and therefore have a rotational symmetry number of one; the few exceptions to this rule are positively charged. Further, we tested the influence of applying a quasi-harmonic correction to low-frequency vibrational modes. Generally, this decreases the predicted new particle formation rates and significantly alters the shape of the formation rate curve plotted against the sulfuric acid concentration. We found that the impact of the maximum size of the clusters explicitly included in the simulations depends on the simulated conditions: the errors due to the limited set of simulated clusters generally increase with temperature and decrease with vapor concentrations. The boundary conditions for clusters that are counted as formed particles (outgrowing clusters) have only a small influence on the results, provided that the definition is chemically reasonable and the set of simulated clusters is sufficiently large. We compared the predicted particle formation rates with experimental data measured at the CLOUD (Cosmics Leaving OUtdoor Droplets) chamber. A cluster distribution dynamics model shows improved agreement with experiments when using our new input data and the proposed combination of symmetry and quasi-harmonic corrections, compared to an earlier study based on older quantum chemical data.
  • Ihalainen, Olli (2019)
    The Earth's Bond albedo is the ratio of the total reflected radiative flux emerging from the Earth's Top of the Atmosphere (ToA) to the incident solar radiation. As such, it is a crucial component in modeling the Earth's climate. This thesis presents a novel method for estimating the Earth's Bond albedo, utilising the dynamical effects of Earth radiation pressure on satellite orbits, which are directly related to the Bond albedo. Whereas current methods for estimating the outgoing reflected radiation are based on point measurements of the radiance reflected by the Earth, taken in the proximity of the planet, the new method presented in this thesis makes use of the fact that the Global Positioning System (GPS) satellites together view the entirety of the ToA surface. The theoretical groundwork for this new method is laid starting from the basic principles of light scattering, satellite dynamics, and Bayesian inference. The feasibility of the method is studied numerically using synthetic data generated from real measurements of GPS satellite orbital elements and from the imaging data of the Earth Polychromatic Imaging Camera (EPIC) aboard the Deep Space Climate Observatory (DSCOVR) spacecraft. The numerical methods section introduces the methods used for forward modeling the ToA outgoing radiation, the Runge-Kutta method for integrating the satellite orbits, and the virtual-observation Markov chain Monte Carlo methods used for solving the inverse problem. The section also describes a simple clustering method used for classifying the ToA from EPIC images. The inverse problem was studied with very simple models for the ToA, the satellites, and the satellite dynamics. These initial results were promising, as the inverse problem algorithm was able to estimate the Bond albedo accurately. Further study of the method is required to determine how the inverse problem algorithm performs when more realism is added to the models.
  • Kupiainen, Tomi (2020)
    In this work we consider the method of unitarily inequivalent representations in the context of Majorana neutrinos and a simple seesaw model. In addition, the field-theoretical framework of neutrino physics, namely that of QFT and the SM, is reviewed. The oscillating neutrino states are expressed via suitable quantum operators acting on the physical vacuum of the theory, which provides further insight into the phenomenological flavor state ansatz made in the standard formulation of neutrino oscillations. We confirm that this method agrees with known results in the ultrarelativistic approximation while extending them to the non-relativistic region.
  • Duevski, Teodor (2019)
    In this thesis we model the term structure of zero-coupon bonds. Firstly, in the static setting, using norm-optimization Hilbert space techniques and starting from a set of benchmark fixed income instruments, we obtain a closed-form expression for a smooth discount curve. Moving on to the dynamic setting, we describe the stochastic modeling of the fixed income market. Finally, we introduce the Heath-Jarrow-Morton (HJM) methodology. We derive the evolution of zero-coupon bond prices implied by the HJM methodology and prove the HJM drift condition for no-arbitrage pricing in the fixed income market in a dynamic setting. Knowing the current discount curve is crucial for pricing and hedging fixed income securities, as it is a basic input to the HJM valuation methodology. Starting from the no-arbitrage prices of a set of benchmark fixed income instruments, we find a smooth discount curve that perfectly reproduces the current market quotes by minimizing a suitably defined norm related to the flatness of the forward curve. The regularity of the estimated discount curve makes it suitable for use as an input to the HJM methodology. This thesis includes a self-contained introduction to the mathematical modeling of the most commonly traded fixed income securities. In addition, we present the mathematical background necessary for modeling the fixed income market in a dynamic setting. Some familiarity with analysis, basic probability theory and functional analysis is assumed.
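    For reference, the HJM drift condition referred to above takes the standard form below under the risk-neutral measure; the notation is conventional and assumed here for orientation.

```latex
% HJM drift condition under the risk-neutral measure (standard form, notation
% assumed): f(t,T) is the instantaneous forward rate, \alpha its drift,
% \sigma its volatility and W_t a Brownian motion.
\mathrm{d}f(t,T) = \alpha(t,T)\,\mathrm{d}t + \sigma(t,T)\,\mathrm{d}W_t,
\qquad
\alpha(t,T) = \sigma(t,T) \int_t^T \sigma(t,s)\,\mathrm{d}s
```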
  • Mukkula, Olli (2024)
    Quantum computers utilize qubits to store and process quantum information. In superconducting quantum computers, qubits are implemented as quantum superconducting resonant circuits. The circuits are operated using only two of their energy states, which form the computational basis of the qubit. To suppress leakage to non-computational states, superconducting qubits are designed to be anharmonic oscillators, which is achieved using one or more Josephson junctions, a nonlinear superconducting element. One of the main challenges in developing quantum computers is minimizing the decoherence caused by environmental noise. Decoherence is characterized by two coherence times, T1 for depolarization processes and T2 for dephasing. This thesis reviews and investigates the decoherence properties of superconducting qubits. The main goal of the thesis is to analyze the tradeoff between anharmonicity and dephasing in the unimon qubit. The recently developed unimon incorporates a single Josephson junction shunted by a linear inductor and a capacitor. The unimon is tunable by an external magnetic flux, and at the half-flux-quantum bias the Josephson energy is partially canceled by the inductive energy, allowing the unimon to have relatively high anharmonicity while remaining fully protected against low-frequency charge noise. In addition, at the sweet spot with respect to the magnetic flux, the unimon becomes immune to first-order perturbations in the flux. The sweet spot, however, is relatively narrow, making the unimon susceptible to dephasing through the quadratic coupling to the flux noise. In the first chapter of this thesis, we present a comprehensive look into the basic theory of superconducting qubits, starting with two-state quantum systems, followed by superconductivity and superconducting circuit elements, and finally combining the two by introducing circuit quantum electrodynamics (cQED), a framework for building superconducting qubits. We follow with a theoretical discussion of decoherence in two-state quantum systems, described by the Bloch-Redfield formalism. We continue the discussion by estimating decoherence using perturbation theory, with special care given to the dephasing due to low-frequency 1/f noise. Finally, we review the theoretical model of the unimon, which is used in the numerical analysis. As the main result of this thesis, we suggest a design parameter regime for the unimon which gives the best ratio between anharmonicity and T2.
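    For orientation, the two coherence times mentioned above are conventionally related as follows; the notation is standard and assumed here rather than quoted from the thesis.

```latex
% Standard relation between the coherence times (notation assumed):
% T_1 is the depolarization time, T_\varphi the pure dephasing time and
% T_2 the total dephasing time.
\frac{1}{T_2} = \frac{1}{2 T_1} + \frac{1}{T_\varphi}
```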
  • Kejzar, Nejc (2020)
    AMPA receptors (AMPARs) are the most numerous synaptic receptors in the hippocampus, where they play one of the central roles in the expression of long-term potentiation (LTP), the molecular mechanism underlying learning and memory. They belong to the group of glutamate-gated ion channels and have a structure characterized by four discrete domains. While the functional roles of the C-terminal (CTD), transmembrane (TMD) and ligand-binding (LBD) domains have largely been established, the regulatory capacity - if any - of the N-terminal domain (NTD) remains in question. In this thesis we used molecular dynamics (MD) simulations to show directly for the first time that the AMPA receptor NTD can respond to the pH of the surrounding medium. Specifically, we identified a pair of histidine residues in the NTD interface which are capable of acting as pH sensors: upon acidification of the environment, the two histidines become protonated and destabilize the NTD interface through electrostatic repulsion. If experimentally validated, this model could provide a mechanistic explanation of AMPAR clustering in synapses. Due to their low affinity for glutamate under physiological conditions, it has been proposed that AMPARs form clusters right underneath glutamate release sites in order to produce sufficiently large postsynaptic depolarizations. Since the lumen of glutamate vesicles is acidic, presynaptic glutamate release is coupled to transient acidification of the synaptic environment. In our model this acidification is detected by the identified interface histidines, which upon protonation cause a structural rearrangement of the NTD interface. This rearrangement could lead to the formation of interactions either with other AMPARs or with synaptic anchor proteins (such as PSD-95), resulting in AMPAR clustering underneath glutamate release sites.
  • Takko, Heli (2021)
    Quantum entanglement is one of the biggest mysteries in physics. In gauge field theories, the amount of entanglement can be measured with certain quantities. For an entangled system, these quantities exhibit correlations in both time and spatial coordinates that do not fit into our current understanding of the locality of the measures and correlations. Difficulties in obtaining probes for entanglement in gauge theories arise from the problem of nonlocality, which can be stated as the problem of decomposing the space of physical states into different regions. In this thesis, we focus on a particular supersymmetric Yang-Mills theory that is holographically dual to a classical gravity theory in an asymptotically anti-de Sitter spacetime. We introduce the most important holographic probes of entanglement and discuss the inequalities obtained from the dual formulation of the entanglement entropy. We introduce subregion duality as an interesting conjecture of holography that remains under research. The understanding of subregion duality is not necessarily solid in arbitrary geometries, as there are new results that either suggest a violation of subregion duality or go against our common understanding of holography by reconstructing the bulk metric beyond the entanglement wedge. This thesis investigates this aspect of subregion duality by evaluating bulk probes, such as the Wilson loop, for two different geometries (deconfining and confining). We aim to find out whether or not these probes remain inside the entanglement wedge. We find that, for both geometries in four dimensions, subregion duality is not violated; in other words, the reduced CFT state does not encode information about the bulk beyond the entanglement wedge. However, we cannot assume this is the case for arbitrary geometries, and therefore this topic will remain of interest for future research.
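    The most prominent of the holographic probes of entanglement mentioned above is the Ryu-Takayanagi prescription, recalled here in its standard form purely as a point of reference.

```latex
% Ryu-Takayanagi formula (standard form, given for orientation): S_A is the
% entanglement entropy of the boundary region A, \gamma_A the minimal bulk
% surface anchored on the boundary of A, and G_N Newton's constant.
S_A = \frac{\mathrm{Area}(\gamma_A)}{4 G_N}
```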