
Browsing by master's degree program "Teoreettisten ja laskennallisten menetelmien maisteriohjelma (Master's Programme in Theoretical and Computational Methods)"


  • Kotipalo, Leo (2023)
    Simulating space plasma on a global scale is computationally demanding due to the system size involved. Modeling regions with variable resolution depending on physical behavior can save computational resources without compromising too much on simulation accuracy. This thesis examines adaptive mesh refinement as a method of optimizing Vlasiator, a global hybrid-Vlasov plasma simulation. The behavior of plasma near Earth's magnetosphere and the different characteristic scales that need to be considered in simulation are introduced. Kinetic models using statistical methods and fluid methods are examined. Modeling electrons kinetically requires resolutions orders of magnitude finer than for ions, so in Vlasiator ions are modeled kinetically and electrons as a fluid. This allows for a lighter simulation while preserving some kinetic effects. Mesh refinement as used in Vlasiator is introduced as a method to save memory and computational work. Due to the structure of the magnetosphere, resolution is not uniform across the simulation domain, with the tail regions and magnetopause in particular exhibiting rapid spatial changes compared to the relatively uniform solar wind. The region to refine is parametrized and static throughout a simulation run. Adaptive mesh refinement based on the simulation data is introduced as an evolution of this method. This provides several benefits: more rigorous optimization of refinement regions, easier reparametrization for different conditions, the ability to follow dynamic structures, and savings in initialization time. Refinement is based on two indices, measuring respectively the spatial rate of change of relevant variables and reconnection. The grid is re-refined at set intervals as the simulation runs. Tests similar to production runs show adaptive refinement to be an efficient replacement for static refinement. Refinement parameters produce results similar to the static method, while giving somewhat different refinement regions.
Performance is in line with static refinement, and refinement overhead is minor. Further avenues of development are presented, including dynamic refinement intervals.
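As a rough illustration of the idea described in the abstract (a hypothetical sketch, not Vlasiator's actual refinement code), a per-cell index measuring the spatial rate of change of a variable can be thresholded to select cells for refinement:

```python
import numpy as np

def refinement_index(field):
    """Spatial rate of change of a variable, scaled by its peak magnitude."""
    return np.abs(np.gradient(field)) / np.max(np.abs(field))

def cells_to_refine(field, threshold=0.2):
    """Flag cells whose refinement index exceeds the threshold."""
    return refinement_index(field) > threshold

# A smooth background with one sharp front: only the front should be flagged.
x = np.linspace(0.0, 1.0, 200)
field = 1.0 + np.tanh((x - 0.5) / 0.005)
flags = cells_to_refine(field)
```

In an adaptive scheme of the kind described, such a test would be re-evaluated at set intervals, refining and coarsening the grid as structures move.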
  • Sukuvaara, Satumaaria (2023)
    Many beyond the Standard Model theories include a first-order phase transition in the early universe. A phase transition of this kind is presumed to be able to source gravitational waves that might be observed with future detectors, such as the Laser Interferometer Space Antenna. A first-order phase transition from a symmetric (metastable) minimum to the broken (stable) one causes the nucleation of broken phase bubbles. These bubbles expand and then collide. It is important to examine in depth how the bubbles collide, as the events during the collision affect the gravitational wave spectrum. We assume the field to interact very weakly or not at all with the particle fluid in the early universe. The universe also experiences fluctuations due to thermal or quantum effects. We look into how these background fluctuations affect the field evolution and bubble collisions during the phase transition in O(N) scalar field theory. Specifically, we numerically simulate two colliding bubbles nucleated on top of the background fluctuations, with the field being an N-dimensional vector in the O(N) group. Due to the symmetries present, the system can be examined in cylindrical coordinates, lowering the number of simulated spatial dimensions. In this thesis, we perform the calculation of initial state fluctuations and simulate them and two bubbles numerically. We present results of the simulation of the field, concentrating on the effects of fluctuations on the O(N) scalar field theory.
  • Stirling, Nico Toivo (2023)
    In this thesis a computation of the non-perturbative Lorentzian graviton propagator, which has appeared in the literature, is outlined. Firstly, the necessary ingredients for the computation are introduced and discussed. These include: General Relativity (GR), its path integral quantisation around a Minkowski space background, and the definition of the graviton propagator along with its relation to the one-particle-irreducible (1PI) graviton 2-point function. A brief discussion of the perturbative non-renormalizability of the theory is followed by the introduction of the functional renormalization group (fRG) equation, from which an fRG equation for the scalar coefficient function of the transverse-traceless (TT) 1PI graviton 2-point function is derived. After these ingredients have been introduced we proceed to outline the computation in question, skipping the details of its most involved steps. The computation starts by defining the spectral function and the Källén-Lehmann spectral representation of propagators. The non-perturbative TT 1PI graviton 2-point function, the propagators and the spectral functions are parameterized, and the fRG flow equation for the TT 1PI graviton 2-point function is used together with certain renormalization conditions to define renormalization group (RG) flow equations for these parameters. The solution of the flow of the parameters is displayed and is used to construct the graviton spectral function and the graviton propagator, which are both displayed graphically. Finally, a discussion of the features of the spectral function and propagator is given, and these results are briefly discussed in the context of the asymptotic safety program for quantum gravity and some of its open issues.
  • Mäkelä, Noora (2022)
    Sum-product networks (SPN) are graphical models capable of handling large amounts of multidimensional data. Unlike many other graphical models, SPNs are tractable if certain structural requirements are fulfilled; a model is called tractable if probabilistic inference can be performed in polynomial time with respect to the size of the model. The learning of SPNs can be separated into two modes, parameter and structure learning. Many earlier approaches to SPN learning have treated the two modes as separate, but it has been found that by alternating between these two modes, good results can be achieved. One example of this kind of algorithm was presented by Trapp et al. in the article Bayesian Learning of Sum-Product Networks (NeurIPS, 2019). This thesis discusses SPNs and a Bayesian learning algorithm developed based on the aforementioned algorithm, differing in some of the used methods. The algorithm by Trapp et al. uses Gibbs sampling in the parameter learning phase, whereas here Metropolis-Hastings MCMC is used. The algorithm developed for this thesis was used in two experiments, with a small and simple SPN and with a larger and more complex SPN. Also, the effect of the data set size and the complexity of the data was explored. The results were compared to those obtained from running the original algorithm developed by Trapp et al. The results show that having more data in the learning phase makes the results more accurate, as it is easier for the model to spot patterns in a larger set of data. It was also shown that the model was able to learn the parameters in the experiments if the data were simple enough, in other words, if the dimensions of the data contained only one distribution per dimension. In the case of more complex data, where there were multiple distributions per dimension, the computation's difficulty was evident in the results.
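The parameter-sampling step mentioned above can be illustrated with a generic random-walk Metropolis-Hastings kernel. This is a minimal sketch: a toy one-dimensional target stands in for an SPN parameter posterior, and nothing here is taken from the thesis code.

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings: propose x' ~ N(x, step^2) and
    accept with probability min(1, target(x') / target(x))."""
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        proposal = x + step * rng.normal()
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal  # accept; otherwise keep the current state
        samples[i] = x
    return samples

# Toy stand-in for a parameter posterior: a standard normal log-density.
draws = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_samples=50_000)
```

The same accept/reject rule applies unchanged to higher-dimensional parameter vectors, which is what an SPN learner would actually sample.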
  • Kähärä, Jaakko (2022)
    We study the properties of flat band states of bosons and their potential for all-optical switching. Flat bands are dispersionless energy bands found in certain lattice structures. The corresponding eigenstates, called flat band states, have the unique property of being localized to a small region of the lattice. The high sensitivity of flat band lattices to the effects of interactions could make them suitable for fast, energy-efficient switching. We use the Bose-Hubbard model and computational methods to study multi-boson systems by simulating the time-evolution of the particle states and computing the particle currents. As the systems were small, fewer than ten bosons, the results could be computed exactly. This was done by solving the eigenstates of the system Hamiltonian using exact diagonalization. We focus on a finite-length sawtooth lattice, first simulating weakly interacting bosons initially in a flat band state. Particle current is shown to typically increase linearly with interaction strength. However, by fine-tuning the hopping amplitudes and boundary potentials, the particle current through the lattice can be highly suppressed. We use this property to construct a switch which is turned on by pumping the input with control photons. Inclusion of particle interactions disrupts the system, resulting in a large non-linear increase in particle current. We find that certain flat band lattices could be used as a medium for an optical switch capable of controlling the transport of individual photons. In practice, highly optically nonlinear materials are required to reduce the switching time, which is found to be inversely proportional to the interaction strength.
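The exact-diagonalization approach mentioned above can be sketched on the smallest possible example: a two-site Bose-Hubbard system (far smaller than the sawtooth lattices studied in the thesis, and purely illustrative):

```python
import numpy as np

def bose_hubbard_2site(n_bosons, J, U):
    """Two-site Bose-Hubbard Hamiltonian in the fixed-N Fock basis
    |n, N - n>, n = 0..N:  H = -J (a1+ a2 + a2+ a1) + (U/2) sum_i n_i (n_i - 1)."""
    dim = n_bosons + 1
    H = np.zeros((dim, dim))
    for n in range(dim):
        m = n_bosons - n                  # occupation of site 2
        H[n, n] = 0.5 * U * (n * (n - 1) + m * (m - 1))
        if n < n_bosons:                  # hopping |n, m> -> |n + 1, m - 1>
            amp = -J * np.sqrt((n + 1) * m)
            H[n + 1, n] = H[n, n + 1] = amp
    return H

# Two non-interacting bosons: single-particle energies -J and +J give
# total energies -2J, 0 and +2J.
H = bose_hubbard_2site(n_bosons=2, J=1.0, U=0.0)
energies = np.linalg.eigvalsh(H)
```

Time evolution of an initial state then follows by expanding it in the computed eigenbasis, which is how particle currents can be obtained exactly for small systems.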
  • Haavisto, Mikael (2024)
    Characteristic classes offer an indispensable family of topological invariants in the theory of vector bundles, appearing in the intersection of differential geometry, algebraic topology, and algebraic geometry. The main focus of this work is two fundamental sets of characteristic classes: the Chern classes and the Chern character classes. The two sets, as characteristic classes of complex vector bundles, are abundantly encountered in various mathematical contexts. Furthermore, both enjoy axiomatic definitions which characterize them uniquely as cohomology classes. For vector bundles over smooth manifolds, Chern–Weil theory is an entirely geometric framework for concretely reproducing characteristic cohomology classes. The fundamental construction in Chern–Weil theory for vector bundles is the Chern–Weil homomorphism. This is an algebra homomorphism mapping elements of the algebra of invariant polynomials on gl(n,C) to de Rham cohomology classes on the base manifold of a vector bundle, obtained from evaluating the invariant polynomials on local curvature forms associated to a connection. In this work, we review the necessary background in order to define the aforementioned map properly, and we briefly study its implications. The content of the work includes an introduction to smooth fiber bundles, the basics of the notions of connections and curvature on vector bundles, as well as the theory of the algebra of invariant polynomials on gl(n,C). After this, the work finally culminates in the Chern–Weil construction and its consequences. More specifically, we will verify that the characteristic classes obtained via the Chern–Weil map satisfy a general axiomatic definition that is independent of the geometric approach chosen here. Afterwards, the remaining discussion treats some basic characteristics and applications of the Chern classes, as well as the Chern character as a natural ring homomorphism from topological K-theory to de Rham cohomology.
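For orientation, the two families of classes discussed above admit compact Chern–Weil formulas (these are the standard textbook definitions, not quoted from the thesis): for a complex vector bundle E with connection ∇ and curvature F_∇,

\[ c(E) \;=\; \det\!\Bigl( I + \tfrac{i}{2\pi}\, F_\nabla \Bigr), \qquad \operatorname{ch}(E) \;=\; \operatorname{tr} \exp\!\Bigl( \tfrac{i}{2\pi}\, F_\nabla \Bigr), \]

with the resulting de Rham cohomology classes independent of the choice of connection.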
  • Nurmela, Mika (2022)
    We study a system of cold high-density matter consisting purely of quarks and gluons. The mathematical construction of Quantum Chromodynamics (QCD) introduces interactions between the fields, which modify the thermodynamic properties of the system. In the presence of interactions, we cannot solve the thermodynamic properties of the system analytically. The method is to expand the result in a series in terms of the QCD coupling constant. This is referred to as perturbation theory in the context of thermal field theory (TFT). The coupling constant describes the strength of the interaction. We introduce the basic calculation methods used in QCD and in TFTs in general. We will also include in the calculation the chemical potential associated with the number of quarks in the system. In the case of zero temperature, quarks form a Fermi sphere such that energy states lower than the chemical potential will be Pauli blocked. The resulting fermionic momentum integrals are modified as a consequence. We can split these integrals into two parts, referred to as the vacuum and matter parts. We can split the calculation of the pressure into two distinct contributions: one from skeleton diagrams and one from ring diagrams. The ring diagrams have unphysical IR divergences that we cannot cancel using the counterterms. This is why hard thermal loop (HTL) effective field theory (EFT) is introduced. We will discuss this HTL framework, which requires the computation of the matter part of the gluon polarization tensor, which we will also evaluate in this thesis.
  • Laurila, Sara (2023)
    Certain topological phases of matter exhibit low-energy quasiparticles that closely resemble relativistic Weyl fermions due to their linear dispersion. This notion leads to a quasirelativistic description for these non-relativistic condensed matter quasiparticles. In relativistic quantum field theory, Weyl fermions are subject to chiral anomalies when coupled to gauge fields or non-trivial background geometries. Condensed matter Weyl quasiparticles similarly experience anomalies from their background fields, leading to anomalous transport phenomena. We review the field theory of relativistic fermions in curved spacetimes with torsion, and the macroscopic BCS theory of superconductors and superfluids. Using the example of p+ip-paired superfluids and superconductors, we show how their gapless excitations are quasirelativistic Weyl fermions in an emergent spacetime determined by their background fields. With a simple Landau level argument, we then argue that the presence of torsion in this emergent spacetime leads to a chiral anomaly for the Weyl quasiparticles. In the context of relativistic theory, the torsional contribution to the chiral anomaly is controversial, not least because it depends on a non-universal UV cut-off. The Landau level calculation presented here is also ambiguous for relativistic Weyl fermions. However, as we will show, the quasirelativistic approximation we use and the properties of the underlying superfluid or superconductor lead to a natural cut-off for the quasiparticle anomaly. We match this emergent torsional anomaly to the hydrodynamic anomaly in the p+ip-superfluid 3He-A.
  • Vuojamo, Joonas (2022)
    Topological defects and solitons are nontrivial topological structures that can manifest as robust, nontrivial configurations of a physical field, and appear in many branches of physics, including condensed matter physics, quantum computing, and particle physics. A fruitful testbed for experimenting with these fascinating structures is provided by dilute Bose–Einstein condensates. Bose–Einstein condensation was first predicted in 1925, but it was achieved in a dilute atomic gas only in 1995, in a breakthrough experiment. Since then, the study of Bose–Einstein condensates has expanded to a variety of nontrivial topological structures in condensates of various atomic species. Bose–Einstein condensates with internal spin degrees of freedom may accommodate an especially rich variety of topological structures. Spinor condensates realized in optically trapped ultracold alkali atom gases can be conveniently controlled by external fields and afford an accurate mean-field description. In this thesis, we study the creation and evolution of a monopole-antimonopole pair in such a spin-1 Bose–Einstein condensate by numerically solving the Gross–Pitaevskii equation. The creation of Dirac monopole-antimonopole pairs in a spin-1 Bose–Einstein condensate was proposed and numerically demonstrated in an earlier study. Our numerical results demonstrate that the proposed creation method can be used to create a pair of isolated monopoles with opposite topological charges in a spin-1 Bose–Einstein condensate. We found that the monopole-antimonopole pair created in the polar phase of the spin-1 condensate is unstable against decay into a pair of Alice rings with oscillating radii. As a result of a rapid polar-to-ferromagnetic transition, these Alice rings were observed to decay by expanding on a short timescale.
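A standard way to integrate the Gross–Pitaevskii equation numerically is the split-step Fourier method. The sketch below is a minimal scalar 1D version (the thesis treats a three-component spin-1 field in 3D, which this does not capture), included only to show the structure of such a solver:

```python
import numpy as np

def gpe_split_step(psi, x, dt, steps, g=1.0):
    """Split-step Fourier evolution of the 1D Gross-Pitaevskii equation
    i dpsi/dt = [-(1/2) d^2/dx^2 + V(x) + g |psi|^2] psi, with hbar = m = 1
    and a harmonic trap V(x) = x^2 / 2."""
    V = 0.5 * x**2
    k = 2.0 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])
    kinetic = np.exp(-0.5j * dt * k**2)
    for _ in range(steps):
        psi = np.exp(-0.5j * dt * (V + g * np.abs(psi)**2)) * psi  # half potential step
        psi = np.fft.ifft(kinetic * np.fft.fft(psi))               # full kinetic step
        psi = np.exp(-0.5j * dt * (V + g * np.abs(psi)**2)) * psi  # half potential step
    return psi

x = np.linspace(-10.0, 10.0, 256)
dx = x[1] - x[0]
psi0 = np.exp(-x**2).astype(complex)
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)   # normalize to unit norm
psi = gpe_split_step(psi0, x, dt=1e-3, steps=500)
```

Each sub-step is a pure phase multiplication, so the scheme conserves the norm of the wavefunction to machine precision, a useful sanity check for any condensate simulation.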
  • Sirkiä, Topi (2023)
    The QCD axion arises as a necessary consequence of the popular Peccei-Quinn solution to the strong CP problem in particle physics. The axion turns out to possess, very naturally, all the usual qualities of a good dark matter (DM) candidate. Having the potential to solve two major problems in particle cosmology in one fell swoop makes the axion a very attractive prospect. In recent years, the weakening of the traditional WIMP dark matter paradigm and axion search experiments just beginning to reach the sensitivities required to look for the QCD axion have further increased interest in axion physics. In this thesis, the basics of axion physics are reviewed, and an in-depth exposition of common direct detection experiments and astrophysical and laboratory limits is given. Particular emphasis is placed on direct detection using the axion-photon coupling, as it is the only coupling for which experimental sensitivity is enough to probe the QCD axion. The benchmark experiments of light-shining-through-wall (LSTW) setups, helioscopes and cavity haloscopes are given a thorough theoretical treatment. Other couplings and related experiments are relevant when looking for axion-like particles (ALPs), which are postulated by various extensions of the Standard Model but which do not solve the strong CP problem. A general overview of the prevalent ALP searches is given. Most of the described experimental setups, with some exceptions, are actually searches for very general weakly interacting particles, WISPs, with a certain coupling. The searches are thus well motivated regardless of the future standing of the QCD axion. A chapter is dedicated to axion dark matter and its creation mechanisms, in particular the misalignment mechanism. Two scenarios are mapped out, depending on whether the Peccei-Quinn symmetry spontaneously breaks before or after inflation. Both cases have experimental implications, which are compared. 
These considerations motivate an axion dark matter window which should be prioritized by experiments. A significant part of this thesis is dedicated to mapping out the experimental landscape of axions today. The up-to-date astrophysical and laboratory limits on the most prominent axion couplings along with projections of some near-future experiments are compiled into a set of exclusion plots.
  • Vihko, Sami (2022)
    We will review techniques of perturbative thermal quantum chromodynamics (QCD) in the imaginary-time formalism (ITF). The infrared (IR) problems arising from the perturbative treatment of the equilibrium thermodynamics of QCD and their phenomenological causes will be investigated in detail. We will also discuss the construction of the two effective field theory (EFT) frameworks most often used in modern high-precision calculations to overcome them. The EFTs are the dimensionally reduced theories EQCD and MQCD and hard thermal loop (HTL) effective theory. EQCD is three-dimensional Euclidean Yang-Mills theory coupled to an adjoint scalar field, and MQCD is three-dimensional Euclidean pure Yang-Mills theory. The effective parameters in these theories are determined through matching calculations. HTL is based on the resummation of hard thermal loops and uses effective propagators and vertex functions. We will also discuss the perturbative determination of the pressure of QCD. Throughout, this thesis details the calculations and the methodology.
  • Åström, Hugo (2022)
    I discuss recent work regarding electronic structure calculations on quantum computers. I introduce quantum computing and electronic structure theory, and then discuss different mappings from electrons and excitation operators to qubits and unitary operators, mainly Jordan–Wigner and Bravyi–Kitaev. I discuss adiabatic quantum computing in connection to state preparation on quantum computers. I introduce the most important algorithms in the field, namely quantum phase estimation (QPE) and the variational quantum eigensolver (VQE). I also mention recent modifications and improvements to these algorithms. Then I take a detour to discuss noise and quantum operations, a model for understanding how quantum computations fail because of noise from the environment. Because of this noise, quantum simulators have emerged as a tool for understanding quantum computers, and I have used such simulators to do electronic structure calculations on small atoms. The algorithm I have used, QPE, yields the exact result within the employed basis. As a basis I use numerical orbitals, which are very robust due to their flexibility.
  • Pirnes, Sakari (2023)
    The Smoluchowski coagulation equation is considered to be one of the most fundamental equations of the classical description of matter, alongside the Boltzmann, Navier-Stokes and Euler equations. It has applications from physical chemistry to astronomy. In this thesis, a new existence result of measure-valued solutions to the coagulation equation is proven. The proven existence result is stronger and more general than a previously claimed result. The proven result holds for a generic class of coagulation kernels, including various kernels used in applications. The coagulation equation models binary coagulation of objects characterized by a strictly positive real number called size, which often represents mass or volume in applications. In binary coagulation, two objects can merge together with a rate characterized by the so-called coagulation kernel. Time evolution of the size distribution is given by the coagulation equation. Traditionally the coagulation equation has two forms, discrete and continuous, referring to whether the objects' sizes take discrete or continuous values. A similar existence result to the one proven in this thesis has been obtained for the continuous coagulation equation, while the discrete coagulation equation is often favored in applications. Being able to study both discrete and continuous systems and their mixtures at the same time has motivated the study of measure-valued solutions to the coagulation equation. After motivating the existence result proven in this thesis, its proof is organized into four Steps described at the end of the introduction. The needed mathematical tools and their connection to the four Steps are presented in chapter 2. The precise mathematical statement of the existence result is given in chapter 3 together with Step 1, where the coagulation equation is regularized using a parameter ε ∈ (0, 1) into a more manageable regularized coagulation equation. 
Step 2 is done in chapter 4 and it consists of proving existence and uniqueness of a solution f_ε for each regularized coagulation equation. Step 3 and Step 4 are done in chapter 5. In Step 3, it will be proven that the regularized solutions {f_ε} have a converging subsequence in the topology of uniform convergence on compact sets. Step 4 finishes the existence proof by verifying that the subsequence’s limit satisfies the original coagulation equation. Possible improvements and future work are outlined in chapter 6.
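The discrete form of the equation discussed above is easy to integrate numerically. As a minimal illustration (unrelated to the measure-valued analysis in the thesis), here is the truncated discrete Smoluchowski equation with a constant kernel, stepped with explicit Euler:

```python
import numpy as np

def smoluchowski_step(c, K, dt):
    """One explicit Euler step of the discrete Smoluchowski equation with a
    constant kernel K:
        dc_k/dt = (1/2) sum_{i+j=k} K c_i c_j - c_k sum_j K c_j.
    Sizes run 1..len(c); pairs producing a size beyond the cutoff are dropped."""
    M = c.size
    gain = np.zeros(M)
    for k in range(2, M + 1):
        i = np.arange(1, k)                   # all splits i + j = k
        gain[k - 1] = 0.5 * K * np.sum(c[i - 1] * c[k - i - 1])
    loss = K * c * c.sum()
    return c + dt * (gain - loss)

c = np.zeros(64)
c[0] = 1.0                                    # monodisperse initial data
for _ in range(1000):                         # evolve to t = 1
    c = smoluchowski_step(c, K=1.0, dt=1e-3)
# For a constant kernel the total number obeys N(t) = N0 / (1 + K N0 t / 2),
# which gives a direct check on the numerics.
```

The constant-kernel case has a closed-form total number density, making it a convenient benchmark before moving to the general kernels the existence theory covers.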
  • Korhonen, Keijo (2022)
    The variational quantum eigensolver (VQE) is one of the most promising proposals for a hybrid quantum-classical algorithm made to take advantage of near-term quantum computers. With the VQE it is possible to find ground state properties of various molecules, a task for which many classical algorithms have been developed, but which either become too inaccurate or too resource-intensive, especially for so-called strongly correlated problems. The advantage of the VQE lies in the ability of a quantum computer to represent a complex system with fewer so-called qubits than a classical computer would with bits, thus making the simulation of large molecules possible. One of the major bottlenecks for the VQE to become viable for simulating large molecules, however, is the scaling of the number of measurements necessary to estimate expectation values of operators. Numerous solutions have been proposed, including the use of adaptive informationally complete positive operator-valued measures (IC-POVMs) by García-Pérez et al. (2021). Adaptive IC-POVMs have been shown to improve the precision of estimates of expectation values on quantum computers, with better scaling in the number of measurements compared to existing methods. The use of these adaptive IC-POVMs in a VQE allows for more precise energy estimates and additional expectation value estimates of separate operators without any further overhead on the quantum computer. We show that this approach improves upon existing measurement schemes and adds a layer of flexibility, as IC-POVMs represent a form of generalized measurements. In addition to a naive implementation of IC-POVMs as part of the energy estimation in the VQE, we propose techniques to reduce the number of measurements, by adapting the number of measurements necessary for a given energy estimate or through estimation of the operator variance for a Hamiltonian. 
We present results for simulations using the former technique, showing that we are able to reduce the number of measurements while retaining the improvement in the measurement precision obtained from IC-POVMs.
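The measurement-scaling bottleneck mentioned above comes from simple sampling statistics: the standard error of a sample-mean estimate of an expectation value shrinks as 1/sqrt(N). A toy sketch (a classical simulation of single-qubit sampling, not the IC-POVM scheme of the thesis):

```python
import numpy as np

def shots_for_precision(variance, epsilon):
    """Shots needed so the standard error of a sample-mean estimate of an
    expectation value is at most epsilon: N >= variance / epsilon^2."""
    return int(np.ceil(variance / epsilon**2))

# Estimating <Z> for a single qubit measured in the computational basis,
# with outcome +1 occurring with probability 0.8 (so <Z> = 0.6 exactly).
rng = np.random.default_rng(0)
var = 1.0 - 0.6**2                            # variance of a +/-1 outcome
n = shots_for_precision(var, epsilon=0.01)    # 6400 shots for 0.01 precision
outcomes = rng.choice([1, -1], size=n, p=[0.8, 0.2])
estimate = outcomes.mean()
```

Since the required shot count grows as 1/epsilon^2 per operator, any scheme that reuses the same measurement outcomes for several operators, as adaptive IC-POVMs do, directly attacks this cost.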
  • Vilhonen, Essi (2023)
    Many extensions to the Standard Model of particle physics feature a first-order phase transition in the very early universe. This kind of phase transition would source gravitational waves through the collision of nucleation bubbles. These in turn could be detected e.g. with the future space-based gravitational wave observatory LISA (Laser Interferometer Space Antenna). Cosmic strings, on the other hand, are line-like topological defects. In this work, we focus on global strings arising from the spontaneous breakdown of a global symmetry. One example of global strings is axionic strings, which are a popular research topic owing to the role of the axion as a potential dark matter candidate and a solution to the strong CP problem. In this work, our aim is to combine these two sets of early-universe phenomena. We investigate the possibility of creating global strings through the bubble collisions of a first-order phase transition. We use a simplified model with a two-component scalar field to nucleate the bubbles and simulate their expansion, obtaining a short-lived network of global strings in the process. We present results for string lifetime, mean string separations corresponding to different mean bubble separations, and gravitational wave spectra.
  • Seppänen, Kaapo (2021)
    We determine the leading thermal contributions to various self-energies in finite-temperature and -density quantum chromodynamics (QCD). The so-called hard thermal loop (HTL) self-energies are calculated for the quark and gluon fields at one-loop order and for the photon field at two-loop order using the real-time formulation of thermal field theory. In-medium screening effects arising at long wavelengths necessitate the reorganization of perturbative series of thermodynamic quantities. Our results may be directly applied in a reorganization called the HTL resummation, which applies an effective theory for the long-wavelength modes in the medium. The photonic result provides a partial next-to-leading order correction to the current leading-order result and can be later extended to pure QCD with the techniques we develop. The thesis is organized as follows. First, by considering a complex scalar field, we review the main aspects of the equilibrium real-time formalism to build a solid foundation for our thermal field theoretic calculations. Then, these concepts are generalized to QCD, and the properties of the QCD self-energies are thoroughly studied. We discuss the long-wavelength collective behavior of thermal QCD and introduce the HTL theory, outlining also the main motivations for our calculations. The explicit computations of self-energies are presented in extensive detail to highlight the computational techniques we employ.
  • Korhonen, Teo Ilmari (2022)
    Flares are short, high-energy magnetic events on stars, including the Sun. Observations of young stars and red dwarfs regularly show the occurrence of flare events multiple orders of magnitude more energetic than even the fiercest solar storms ever recorded. As our technology remains vulnerable to disruptions due to space weather, the study of flares and other stellar magnetic activity is crucial. Until recently, the detection of extrasolar flares has required much manual work and observation resources. This work presents a mostly automatic pipeline to detect and estimate the energies of extrasolar flare events from optical light curves. To model and remove the star's background radiation in spite of complex periodicity, short windows of nonlinear support vector regression are used to form a multi-model consensus. Outliers above the background are flagged as likely flare events, and a template model is fitted to the flux residual to estimate the energy. This approach is tested on light curves collected from the stars AB Doradus and EK Draconis by the Transiting Exoplanet Survey Satellite, and dozens of flare events are found. The results are consistent with recent literature, and the method is generalizable for further observations with different telescopes and different stars. Challenges remain regarding edge cases, uncertainties, and reliance on user input.
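The detect-by-detrending idea described above can be sketched compactly. This is a simplified stand-in, using a running median in place of the windowed SVR consensus of the thesis, with a synthetic light curve and an injected flare:

```python
import numpy as np

def flag_flares(flux, window=51, n_sigma=5.0):
    """Detrend with a running median (a simple stand-in for windowed
    SVR background models) and flag points far above the background."""
    half = window // 2
    padded = np.pad(flux, half, mode="edge")
    background = np.array([np.median(padded[i:i + window])
                           for i in range(flux.size)])
    residual = flux - background
    # Robust noise estimate via the median absolute deviation.
    sigma = 1.4826 * np.median(np.abs(residual - np.median(residual)))
    return residual > n_sigma * sigma, background

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 1000)
flux = 1.0 + 0.05 * np.sin(2 * np.pi * t / 3)      # slowly varying star
flux += 0.002 * rng.normal(size=t.size)            # photometric noise
flux[400:405] += 0.05 * np.exp(-np.arange(5) / 2)  # injected flare decay
flags, background = flag_flares(flux)
```

In a real pipeline the flagged outliers would then be fitted with a flare template to estimate the event energy, as the abstract describes.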
  • Savola, Mikko (2024)
    In this study we use mutual information to characterise statistical dependencies of seed and relativistic electron fluxes in the Earth’s radiation belts on ultra-low frequency (ULF) wave power measured on the ground and at geostationary orbit. The benefit of mutual information, in comparison to measures such as the Pearson correlation, lies in its capacity to distinguish non-linear dependencies from linear ones. We replicate the methodology in Simms et al. [2014], and we also calculate the conditional mutual information, CMI, between ULF spectral power and the electron fluxes when conditioned on the solar wind speed V. The results of the replication are similar to those presented in Simms et al. [2014] and show low to moderate correlations between ULF Pc5 waves (2-7 mHz electromagnetic waves) and both relativistic and seed electron fluxes, with the correlations ranging from 0.18 to 0.65. The mutual information between ULF Pc5 spectral power and the relativistic electron flux is between 0.17 and 0.22, depending on whether it is evaluated for a storm’s main or recovery phase. The corresponding values of mutual information between ULF Pc5 spectral power and the seed electron flux are between 0.33 and 0.41. All the values of mutual information are statistically significant with a confidence interval of at least 8 standard deviations, except for the mutual information between ground-based ULF measurements and the relativistic electron flux. The highest values of CMI are obtained for roughly V > 600 km/s, which also gives the largest significance ratios for CMI, implying that under conditions with high solar wind speeds the dependence between ULF Pc5 spectral power and the electron fluxes at the outer radiation belts is stronger than for lower solar wind speeds. The mutual information and CMI between the ULF spectral power and the seed electron fluxes are larger, up to twice as high as between ULF spectral power and the relativistic electron flux. 
The mutual information between average ULF spectral power and the peak electron flux after a storm is also higher than the regular mutual information, giving indication of a dependence whose timing might vary on a scale of days.
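The quantity at the heart of the study above has a simple plug-in estimator. As a generic sketch (synthetic data standing in for the ULF power and flux time series; not the thesis code):

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram (plug-in) estimate of the mutual information I(X; Y) in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)           # marginal of X
    py = pxy.sum(axis=0, keepdims=True)           # marginal of Y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
v = rng.normal(size=20_000)                       # driver, e.g. a speed proxy
flux_dep = v + 0.1 * rng.normal(size=v.size)      # strongly dependent response
flux_ind = rng.normal(size=v.size)                # independent response
mi_dep = mutual_information(v, flux_dep)
mi_ind = mutual_information(v, flux_ind)
```

Unlike the Pearson correlation, this estimate is sensitive to any dependence, linear or not, which is exactly why the study prefers it; the conditional variant adds a third variable and averages over its bins.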
  • Muff, Jake (2023)
    Quantum Monte Carlo (QMC) is an accurate but computationally expensive technique for simulating the electronic structure of solids; its use for modelling positron states and annihilation in solids is relatively new. These simulations can support positron annihilation spectroscopy and help with defect characterisation and vacancy identification in solids by calculating the positron lifetime with increased accuracy and comparing it to experimental results. One method of reducing the computational cost of simulations whilst maintaining chemical accuracy is to employ pseudopotentials. Pseudopotentials approximate the interactions between the outer valence electrons of an atom and the inner core electrons, which are difficult to model. By replacing the core electrons of an atom with an effective potential, a level of accuracy can be maintained whilst reducing the computational cost. This work extends existing research with a new set of pseudopotentials in which fewer core electrons are replaced by an effective potential, so that more core electrons are treated explicitly in the simulation. With these additional core electrons included, the corrections usually applied to the positron lifetime may no longer be needed. Silicon is chosen as the element under study because its high electron count makes it difficult to model accurately in positron simulations. The suitability of these new pseudopotentials for QMC is shown by calculating the cohesive and relaxation energies, with comparisons made to previously used pseudopotentials. The positron lifetime is calculated from QMC simulations and compared against experimental and theoretical values. The simulation method and the challenges due to the inclusion of more core electrons are presented and discussed. The results show that these pseudopotentials are suitable for use in QMC studies, including positron lifetime studies.
With more core electrons included in the simulation, the positron lifetime was calculated with accuracy similar to previous studies and without the need for corrections, demonstrating the validity of the pseudopotentials for use in positron studies. The validation of these pseudopotentials enables future theoretical studies to better capture the annihilation characteristics in cases where core electrons are important. In achieving these results, it was found that energy minimisation, rather than variance minimisation, was needed to optimise the wavefunction with these pseudopotentials.
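The energy-minimisation step mentioned above can be sketched with a toy variational Monte Carlo calculation (a deliberately simplified stand-in for the thesis's production QMC code: a 1D harmonic oscillator with a one-parameter Gaussian trial wavefunction, not silicon with pseudopotentials). The variational parameter is chosen by minimising the Metropolis-sampled mean local energy:

```python
import numpy as np

# Toy variational Monte Carlo: 1D harmonic oscillator H = -1/2 d^2/dx^2 + x^2/2
# with trial wavefunction psi_a(x) = exp(-a x^2).  The local energy is
# E_L(x) = a + x^2 (1/2 - 2 a^2), exactly minimised at a = 1/2 with E = 1/2.

def vmc_energy(alpha, n_samples=20_000, step=1.0, seed=0):
    """Mean local energy of psi_alpha, sampled with Metropolis from |psi|^2."""
    rng = np.random.default_rng(seed)
    x = 0.0
    energies = []
    for i in range(n_samples):
        x_new = x + step * rng.uniform(-1, 1)
        # Metropolis acceptance ratio |psi(x_new)|^2 / |psi(x)|^2
        if rng.uniform() < np.exp(-2 * alpha * (x_new**2 - x**2)):
            x = x_new
        if i > 1000:                    # discard equilibration steps
            energies.append(alpha + x**2 * (0.5 - 2 * alpha**2))
    return float(np.mean(energies))

# Energy minimisation: scan the variational parameter and keep the minimum.
alphas = np.linspace(0.3, 0.8, 11)
E = [vmc_energy(a) for a in alphas]
best = alphas[int(np.argmin(E))]
print(f"optimal alpha ~ {best:.2f}, E ~ {min(E):.3f}  (exact: 0.5, 0.5)")
```

Production codes optimise many parameters with gradient-based schemes rather than a grid scan, but the principle is the same: the energy expectation value is stationary at the exact ground state, which is what makes energy minimisation robust when the trial wavefunction (here, with the new pseudopotentials) is imperfect.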
  • Seppä, Riikka (2023)
    The purpose of this work is to investigate the scaling of ’t Hooft-Polyakov monopoles in the early universe. These monopoles are a general prediction of a grand unified theory phase transition in the early universe, so understanding their behavior is important. We tentatively find a scaling for monopole separation which predicts that the fraction of the universe’s energy in monopoles remains constant in the radiation era, regardless of initial monopole density. We perform lattice simulations on an expanding lattice with a cosmological background. We use the simplest fields that produce ’t Hooft-Polyakov monopoles, namely the SU(2) gauge fields and a Higgs field in the adjoint representation. We initialize the fields such that we can control the initial monopole density. At the beginning of the simulations, a damping phase is performed to suppress nonphysical fluctuations in the fields, which are remnants of the initialization. The fields are then evolved according to the discretized field equations. Among other things, the number of monopoles is counted periodically during the simulation. To extend the dynamical range of the runs, the Press-Spergel-Ryden method is used to first grow the monopole size before the main evolution phase. There are different ways to estimate the average separation between monopoles in a monopole network, as well as the root mean square velocity of the monopoles. We use these estimators to find out how the average separation and velocity evolve during the runs. To find the scaling solution of the system, we fit the separation estimate to a function of conformal time. This way we find that the average separation ξ depends on conformal time η as ξ ∝ η^(1/3), which indicates that the monopole density scales in conformal time the same way as the critical energy density of the universe.
We additionally find that the velocity measured with the velocity estimators depends on the separation as approximately v ∝ dξ/dη. It has been shown that a possible grand unified phase transition would produce an abundance of ’t Hooft-Polyakov monopoles, and that some of these would survive to the present day and begin to dominate the energy density of the universe. Our result seemingly disagrees with this prediction, though there are several reasons why the predictions might not be compatible with the model we simulate. For one, in our model the monopoles do not move with thermal velocities, unlike what most of the predictions assume for the early universe; future simulations with thermal velocities added would thus be needed. Additionally, we ran simulations only in the radiation-dominated era of the universe. During the matter-dominated era, the monopoles might behave differently.
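The scaling-law extraction described in this abstract amounts to fitting a power law ξ ∝ η^p to the measured separations, which is conveniently done as a linear fit in log-log space. A minimal sketch on synthetic data (standing in for the thesis's actual separation estimator output, with an assumed 1% multiplicative noise):

```python
import numpy as np

# Extract a scaling exponent p from mock separation data xi(eta) ~ eta^(1/3)
# by fitting a straight line to log(xi) vs log(eta).
rng = np.random.default_rng(1)
eta = np.linspace(10.0, 1000.0, 200)     # conformal time samples
xi = 2.0 * eta**(1 / 3) * np.exp(0.01 * rng.standard_normal(eta.size))

# In log space, xi = A * eta^p becomes log(xi) = p*log(eta) + log(A),
# so the fitted slope is the scaling exponent.
slope, intercept = np.polyfit(np.log(eta), np.log(xi), 1)
print(f"fitted exponent p ~ {slope:.3f}  (expected 1/3 ~ 0.333)")
```

A fitted exponent consistent with 1/3 is what, per the abstract, indicates the monopole density tracking the critical energy density in the radiation era.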