
Browsing by master's degree program "Magisterprogrammet i teoretiska och beräkningsmetoder"


  • Korhonen, Keijo (2022)
    The variational quantum eigensolver (VQE) is one of the most promising proposals for a hybrid quantum-classical algorithm made to take advantage of near-term quantum computers. With the VQE it is possible to find ground state properties of various molecules, a task for which many classical algorithms have been developed but which become either too inaccurate or too resource-intensive, especially for so-called strongly correlated problems. The advantage of the VQE comes from the ability of a quantum computer to represent a complex system with fewer so-called qubits than a classical computer would with bits, thus making the simulation of large molecules possible. One of the major bottlenecks for the VQE to become viable for simulating large molecules, however, is the scaling of the number of measurements necessary to estimate expectation values of operators. Numerous solutions have been proposed, including the use of adaptive informationally complete positive operator-valued measures (IC-POVMs) by García-Pérez et al. (2021). Adaptive IC-POVMs have been shown to improve the precision of expectation value estimates on quantum computers, with better scaling in the number of measurements compared to existing methods. The use of these adaptive IC-POVMs in a VQE allows for more precise energy estimations and additional expectation value estimations of separate operators without any further overhead on the quantum computer. We show that this approach improves upon existing measurement schemes and adds a layer of flexibility, as IC-POVMs represent a form of generalized measurements. In addition to a naive implementation that uses IC-POVMs as part of the energy estimation in the VQE, we propose techniques to reduce the number of measurements, either by adapting the number of measurements necessary for a given energy estimation or through the estimation of the operator variance for a Hamiltonian. We present results for simulations using the former technique, showing that we are able to reduce the number of measurements while retaining the improvement in measurement precision obtained from IC-POVMs.
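    The abstract above leans on estimating operator expectation values from IC-POVM outcomes. As a rough illustration of that core idea only (not the adaptive scheme of García-Pérez et al. or the thesis's implementation), the following sketch estimates an expectation value for a single qubit from samples of a tetrahedral SIC-POVM using the standard dual-frame weights; all function names and parameter values are illustrative choices, not taken from the thesis.

```python
import numpy as np

# Pauli matrices and identity
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Single-qubit SIC-POVM: effects E_k = Pi_k / 2 built from tetrahedral Bloch vectors
bloch = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
projectors = [(I2 + b[0] * X + b[1] * Y + b[2] * Z) / 2 for b in bloch]
effects = [P / 2 for P in projectors]          # sum_k E_k = identity

def estimate_expectation(rho, obs, shots, rng):
    """Monte Carlo estimate of Tr(rho * obs) from SIC-POVM outcomes.

    Each outcome k is assigned the dual-frame weight
    w_k = (d + 1) Tr(Pi_k obs) - Tr(obs), so the sample mean of w is an
    unbiased estimator of <obs> for an informationally complete POVM.
    """
    d = 2
    probs = np.real([np.trace(rho @ E) for E in effects])
    weights = np.real([(d + 1) * np.trace(P @ obs) - np.trace(obs) for P in projectors])
    outcomes = rng.choice(len(effects), size=shots, p=probs)
    return weights[outcomes].mean()

rng = np.random.default_rng(0)
# Example state |+> = (|0> + |1>)/sqrt(2), for which <X> = 1 and <Z> = 0.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())
print("<X> ~", estimate_expectation(rho, X, shots=100_000, rng=rng))
print("<Z> ~", estimate_expectation(rho, Z, shots=100_000, rng=rng))
```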
  • Pelttari, Hannu (2020)
    Federated learning is a method to train a machine learning model on multiple remote datasets without the need to gather the data from the remote sites to a central location. In healthcare, gathering the data from different hospitals into a central location can be a difficult and time-consuming task due to privacy concerns and regulations regarding the use of sensitive data, making federated learning an attractive alternative to more traditional methods. This thesis adapted an existing federated gradient boosting model, developed a new federated random forest model, and applied them to mortality prediction in intensive care units. The results were then compared to the centralized counterparts of the models. The results showed that while the federated models did not perform as well as the centralized models on a similarly sized dataset, the federated random forest model can achieve superior performance when trained on multiple hospitals' data compared to centralized models trained on a single hospital's data. In scenarios where the centralized models had data from multiple hospitals, the federated models could not perform as well as the centralized models. It was also found that the performance of the centralized models could not be improved with further federated training. In addition to practical advantages, such as the possibility of parallel or asynchronous training without modifications to the algorithm, the federated random forest performed better in all scenarios than the federated gradient boosting. The performance of the federated random forest was also found to be more consistent across scenarios than that of federated gradient boosting, which was highly dependent on factors such as the order in which the hospitals were traversed.
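    To make the federated idea above concrete, here is a minimal sketch, not the thesis's algorithm: each simulated "hospital" fits a random forest on its own data, only the fitted models (never the data) are shared, and predictions are averaged across sites, with a pooled-data forest as the centralized baseline. The synthetic dataset, the split into three sites, and all hyperparameter values are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-hospital ICU data; in practice each site keeps its data local.
X, y = make_classification(n_samples=6000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Split the training data into three "hospitals".
site_X = np.array_split(X_train, 3)
site_y = np.array_split(y_train, 3)

# Each site fits a forest on its own data only.
site_models = [
    RandomForestClassifier(n_estimators=200, random_state=i).fit(Xs, ys)
    for i, (Xs, ys) in enumerate(zip(site_X, site_y))
]

# "Federated" prediction: average the class probabilities of the local forests.
federated_proba = np.mean([m.predict_proba(X_test)[:, 1] for m in site_models], axis=0)

# Centralized baseline trained on the pooled data, for comparison.
central = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
central_proba = central.predict_proba(X_test)[:, 1]

print("federated AUC  :", roc_auc_score(y_test, federated_proba))
print("centralized AUC:", roc_auc_score(y_test, central_proba))
```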
  • Niedermeier, Marcel (2021)
    Matrix product states provide an efficient parametrisation of low-entanglement many-body quantum states. In this thesis, the underlying theory is developed from scratch, requiring only basic notions of quantum mechanics and quantum information theory. A full introduction to matrix product state algebra and matrix product operators is given, culminating in the derivation of the density matrix renormalisation group algorithm. The latter provides a simple variational scheme to determine the ground state of arbitrary one-dimensional many-body quantum systems with supreme precision. As an application of matrix product state technology, the kernel polynomial method is introduced in detail as a state-of-the-art numerical tool to find the spectral function or the dynamical correlator of a given quantum system. This in turn gives access to the elementary excitations of the system, such that the locations of the low-energy eigenstates can be studied directly in real space. To illustrate those theoretical tools concretely, the ground state energy, the entanglement entropy and the elementary excitations of a simple interface model of a Heisenberg ferromagnet and a Heisenberg antiferromagnet are studied. By changing the location of the model in parameter space, the dependence of the above-mentioned quantities on the transverse field and the coupling strength is investigated. Most notably, we find that the entanglement entropy characteristic of the antiferromagnetic ground state stretches across the interface into the ferromagnetic half-chain. The dependence of the physics on the value of the coupling strength is, overall, small, with the exception of the appearance of a boundary mode whose eigenenergy grows with the coupling. A comparison with a localised edge field shows, however, that the boundary mode is a true interaction effect of the two half-chains. Various algorithmic and physics extensions of the present project are discussed, such that the code written as part of this thesis could be turned into a state-of-the-art MPS library with manageable effort. In particular, an application of the kernel polynomial method to calculate finite-temperature correlators is derived in detail.
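    As a concrete illustration of the matrix product state parametrisation described above, the sketch below decomposes a small random state vector into an MPS by successive truncated SVDs and contracts it back. It is a minimal demonstration of the decomposition only, not the DMRG or kernel polynomial code developed in the thesis, and the bond-dimension cap chi_max is an illustrative parameter.

```python
import numpy as np

def to_mps(psi, n_sites, d=2, chi_max=16):
    """Decompose a state vector into a left-canonical MPS by successive SVDs.

    Returns a list of tensors A[i] with shape (chi_left, d, chi_right).
    Singular values beyond chi_max are discarded (truncation).
    """
    tensors = []
    chi = 1
    rest = psi.reshape(chi, -1)
    for site in range(n_sites - 1):
        rest = rest.reshape(chi * d, -1)
        U, S, Vh = np.linalg.svd(rest, full_matrices=False)
        keep = min(chi_max, len(S))
        U, S, Vh = U[:, :keep], S[:keep], Vh[:keep, :]
        tensors.append(U.reshape(chi, d, keep))
        rest = np.diag(S) @ Vh      # push the remaining weight to the right
        chi = keep
    tensors.append(rest.reshape(chi, d, 1))
    return tensors

def contract(tensors):
    """Rebuild the full state vector from the MPS (small systems only)."""
    psi = tensors[0]
    for A in tensors[1:]:
        psi = np.tensordot(psi, A, axes=([-1], [0]))
    return psi.reshape(-1)

n = 10
rng = np.random.default_rng(1)
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)

mps = to_mps(psi, n, chi_max=64)   # no truncation needed for n = 10
print("overlap with original state:", abs(np.vdot(psi, contract(mps))))
```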
  • Vilhonen, Essi (2023)
    Many extensions to the Standard Model of particle physics feature a first-order phase transition in the very early universe. This kind of phase transition would source gravitational waves through the collision of nucleation bubbles. These in turn could be detected, e.g., with the future space-based gravitational wave observatory LISA (Laser Interferometer Space Antenna). Cosmic strings, on the other hand, are line-like topological defects. In this work, we focus on global strings arising from the spontaneous breakdown of a global symmetry. One example of global strings is axionic strings, which are a popular research topic owing to the role of the axion as a potential dark matter candidate and a solution to the strong CP problem. In this work, our aim is to combine these two sets of early-universe phenomena. We investigate the possibility of creating global strings through the bubble collisions of a first-order phase transition. We use a simplified model with a two-component scalar field to nucleate the bubbles and simulate their expansion, obtaining a short-lived network of global strings in the process. We present results for string lifetime, mean string separations corresponding to different mean bubble separations, and gravitational wave spectra.
  • Lankinen, Juhana (2020)
    Due to the unique properties of foams, they can be found in many different applications in a wide variety of fields. The study of foams is also useful for the many properties they share with other phenomena, such as impurities in cooling metals, where the impurities coarsen similarly to bubbles in foams. For these and other reasons, foams have been studied extensively for over a hundred years and continue to be an interesting area of study today, due to new insights in both experimental and theoretical work and new applications waiting to be used and realized in different industries. The most impactful early work in the study of the properties of foams was done in the late 1800s by Plateau. His work was extended in the early to mid-1900s by Lifshitz, Slyozov, Wagner and von Neumann, and by many more authors in recent years. The early work was mostly experimental or theoretical in the sense of performing mathematical calculations on paper, while the modern methods of study have kept the experimental part -- with more refined methods of measurement, of course -- but shifted towards implementing the theory as simulations instead of solving problems on paper. In the early 90s, Durian proposed a new method for simulating the mechanics of wet foams, based on repulsive spring-like forces between neighboring bubbles. This model was later extended to allow for the coarsening of the foam, and a slightly modified version of this model has been implemented in the code presented in this thesis. As foams consist of a very large number of bubbles, it is important to be able to simulate sufficiently large systems to realistically study the physics of foams. Very large systems have traditionally been too slow to simulate at the individual-bubble level, but thanks to the popularity of computer games and the continuous demand for better graphics, graphics processing units have become very powerful and can nowadays be used for highly parallel general-purpose computing. In this thesis, a modified version of Durian's wet foam model that runs on the GPU is presented. The code has been implemented in modern C++ using Nvidia's CUDA on the GPU. Using this program, a typical two-dimensional foam of 100,000 bubbles is first simulated. It is found that the simulation code replicates the expected behaviour for this kind of foam. After this, a more detailed analysis is done of a novel phenomenon: the separation of liquid and gas phases in low-gas-fraction foams, which arises only at sufficiently large system sizes. It is found that the phase separation causes the foam to evolve as a foam of higher gas fraction would until the phases have mixed back together. It is hypothesized that the phase separation is related to uneven energy distribution in the foam, which itself is related to jamming and the uneven distribution of bubble sizes in the foam.
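    The Durian-type model mentioned above treats overlapping bubbles as repulsive springs. A minimal CPU-only sketch of that pair force and an overdamped relaxation step is given below; the thesis's implementation is a C++/CUDA code with coarsening and far larger systems, whereas here the box size, spring constant and time step are arbitrary illustrative values.

```python
import numpy as np

def pair_forces(pos, radii, k=1.0):
    """Repulsive spring-like forces between overlapping bubbles (Durian-type model).

    Two bubbles interact only when they overlap, i.e. when the distance between
    their centres is smaller than the sum of their radii; the force is
    proportional to the overlap and directed along the centre-to-centre line.
    """
    n = len(radii)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            dist = np.linalg.norm(rij)
            overlap = radii[i] + radii[j] - dist
            if overlap > 0 and dist > 0:
                f = k * overlap * rij / dist
                forces[i] += f
                forces[j] -= f
    return forces

def relax(pos, radii, steps=1000, dt=0.02, mobility=1.0):
    """Overdamped dynamics: bubble velocity is proportional to the net force."""
    for _ in range(steps):
        pos = pos + dt * mobility * pair_forces(pos, radii)
    return pos

rng = np.random.default_rng(2)
pos = rng.uniform(0.0, 10.0, size=(50, 2))        # 50 bubbles in a 10 x 10 box
radii = rng.uniform(0.4, 0.8, size=50)            # polydisperse bubble radii
pos = relax(pos, radii)
print("max remaining overlap:",
      max(radii[i] + radii[j] - np.linalg.norm(pos[i] - pos[j])
          for i in range(len(radii)) for j in range(i + 1, len(radii))))
```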
  • Seppänen, Kaapo (2021)
    We determine the leading thermal contributions to various self-energies in finite-temperature and -density quantum chromodynamics (QCD). The so-called hard thermal loop (HTL) self-energies are calculated for the quark and gluon fields at one-loop order and for the photon field at two-loop order using the real-time formulation of thermal field theory. In-medium screening effects arising at long wavelengths necessitate the reorganization of perturbative series of thermodynamic quantities. Our results may be directly applied in a reorganization called the HTL resummation, which applies an effective theory for the long-wavelength modes in the medium. The photonic result provides a partial next-to-leading order correction to the current leading-order result and can be later extended to pure QCD with the techniques we develop. The thesis is organized as follows. First, by considering a complex scalar field, we review the main aspects of the equilibrium real-time formalism to build a solid foundation for our thermal field theoretic calculations. Then, these concepts are generalized to QCD, and the properties of the QCD self-energies are thoroughly studied. We discuss the long-wavelength collective behavior of thermal QCD and introduce the HTL theory, outlining also the main motivations for our calculations. The explicit computations of self-energies are presented in extensive detail to highlight the computational techniques we employ.
  • Polus, Aku (2021)
    We begin by discussing the essential concepts of the standard cosmology, in which the dark matter is "cold" and collisionless. We consider structure formation in the dark matter component and present problems faced by the standard cosmology, as well as some prospective solutions to them. The main problem considered in this work is the tension between the values of the Hubble constant measured with different procedures. We present the theories behind the procedures and conclude the study of the tension by considering the most notable interpretations of its origin. We then set up a proposal for an alternative model describing the dark sector. It is a hidden copy of visible-sector electromagnetism, allowing for radiative cooling in virializing structures. Assuming first an asymmetric particle content, we study which scales of dark matter halos are able to collapse into dense structures. Acquiring a mass function then allows us to conclude how much of the total dark matter component is expected to collapse. If the dark matter particle content is instead taken to be symmetric, the collapsed fraction is assumed to annihilate into dark radiation. With certain modifications to the freely available Boltzmann code CAMB, we implement in the code a representation of the cosmology defined by our model. Lastly, we use the modified cosmology to fit the data defining the Hubble constant and assess the relief of the tension. We find that our model provides a reasonable history for the energy content of the universe and a notable relief of the Hubble tension, although the improvement is only a minor one compared to some more modest modifications of the cosmology.
  • Korhonen, Teo Ilmari (2022)
    Flares are short, high-energy magnetic events on stars, including the Sun. Observations of young stars and red dwarfs regularly show the occurrence of flare events multiple orders of magnitude more energetic than even the fiercest solar storms ever recorded. As our technology remains vulnerable to disruptions due to space weather, the study of flares and other stellar magnetic activity is crucial. Until recently, the detection of extrasolar flares has required much manual work and observation resources. This work presents a mostly automatic pipeline to detect and estimate the energies of extrasolar flare events from optical light curves. To model and remove the star's background radiation in spite of complex periodicity, short windows of nonlinear support vector regression are used to form a multi-model consensus. Outliers above the background are flagged as likely flare events, and a template model is fitted to the flux residual to estimate the energy. This approach is tested on light curves collected from the stars AB Doradus and EK Draconis by the Transiting Exoplanet Survey Satellite, and dozens of flare events are found. The results are consistent with recent literature, and the method is generalizable for further observations with different telescopes and different stars. Challenges remain regarding edge cases, uncertainties, and reliance on user input.
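    A rough sketch of the background-plus-outlier idea described above (not the thesis pipeline itself): fit an RBF-kernel support vector regression to overlapping windows of a synthetic light curve, average the window models into a background estimate, and flag points well above it as candidate flares. The injected flares, window length and SVR hyperparameters are all illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)

# Synthetic light curve: slow periodic variability + noise + two injected flares.
t = np.linspace(0.0, 10.0, 2000)
flux = 1.0 + 0.05 * np.sin(2 * np.pi * t / 3.1) + rng.normal(0.0, 0.003, t.size)
for t0, amp in [(2.7, 0.04), (7.4, 0.06)]:
    decay = np.exp(-(t - t0) / 0.05)
    flux += np.where(t >= t0, amp * decay, 0.0)

# Fit an RBF-kernel SVR to overlapping windows and average the window models.
window, step = 400, 200
background = np.zeros_like(flux)
counts = np.zeros_like(flux)
for start in range(0, len(t) - window + 1, step):
    sl = slice(start, start + window)
    model = SVR(kernel="rbf", C=10.0, epsilon=0.005, gamma=2.0)
    model.fit(t[sl, None], flux[sl])
    background[sl] += model.predict(t[sl, None])
    counts[sl] += 1
background /= np.maximum(counts, 1)

# Flag points well above the background model as candidate flare points.
residual = flux - background
threshold = 3.0 * np.std(residual)
candidates = t[residual > threshold]
print("candidate flare times:", candidates[:10])
```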
  • Seshadri, Sangita (2020)
    Blurring is a common phenomenon in image formation, caused by various factors such as motion between the camera and the object, atmospheric turbulence, or the camera failing to have the object in focus, all of which degrade the image formation process. The pixels interact with their neighbors, and the captured image is blurry as a result. This interaction with the neighboring pixels is the 'spread', which is represented by the point spread function. Image deblurring has many applications, for example in astronomy and medical imaging, where extracting the exact image required might not be possible due to various limiting factors, and what we get is a deformed image. In such cases, it is necessary to use an apt deblurring algorithm, keeping all necessary factors such as performance and time in mind. This thesis analyzes the performance of learning-based and analytical methods for image deblurring. Inverse problems are discussed first, together with why ill-posed inverse problems such as image deblurring cannot be tackled by naive deconvolution. This is followed by a look at the need for regularization and how it is necessary to control the fluctuations resulting from extreme sensitivity to noise. The image reconstruction problem has the form of a convex variational problem, with prior knowledge acting as inequality constraints that create a feasible region for the optimal solution. Interior point methods iterate within this feasible region. This thesis uses the iRestNet method, a machine learning approach based on forward-backward iterations, and a total variation approach implemented with the FlexBox tool as the analytical method, which uses a primal-dual algorithm. Performance is measured using SSIM indices for a range of kernels, and the SSIM map is also analyzed to compare deblurring efficiency.
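    To illustrate why naive deconvolution fails and how regularization controls the noise amplification, the following one-dimensional sketch compares inverse filtering with a simple Tikhonov-regularized deconvolution in the Fourier domain. The quadratic regularizer is used purely to keep the example short; the thesis itself uses total variation (FlexBox) and the learned iRestNet approach, and the PSF width, noise level and regularization weight here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simple 1-D "image": a piecewise-constant signal.
n = 256
x = np.zeros(n)
x[60:110] = 1.0
x[150:200] = 0.5

# Gaussian point spread function (PSF); blurring is a periodic convolution via FFT.
grid = np.arange(n)
psf = np.exp(-0.5 * ((grid - n // 2) / 4.0) ** 2)
psf /= psf.sum()
H = np.fft.fft(np.fft.ifftshift(psf))

blurred = np.real(np.fft.ifft(H * np.fft.fft(x)))
noisy = blurred + rng.normal(0.0, 1e-3, n)

# Naive deconvolution: divide by H -> noise is amplified wherever |H| is tiny.
naive = np.real(np.fft.ifft(np.fft.fft(noisy) / H))

# Tikhonov-regularized deconvolution: damp the frequencies where |H| is small.
lam = 1e-2
tikhonov = np.real(np.fft.ifft(np.conj(H) * np.fft.fft(noisy) / (np.abs(H) ** 2 + lam)))

print("naive    error:", np.linalg.norm(naive - x))
print("tikhonov error:", np.linalg.norm(tikhonov - x))
```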
  • Besel, Vitus (2020)
    We investigated the impact of various parameters on new particle formation rates predicted for the sulfuric acid - ammonia system using cluster distribution dynamics simulations, in our case ACDC (Atmospheric Cluster Dynamics Code). The predicted particle formation rates increase significantly if the rotational symmetry numbers of the monomers (sulfuric acid and ammonia molecules, and bisulfate and ammonium ions) are considered in the simulation. On the other hand, inclusion of the rotational symmetry numbers of the clusters changes the results only slightly, and only in conditions where charged clusters dominate the particle formation rate, because most of the clusters stable enough to participate in new particle formation display no symmetry and therefore have a rotational symmetry number of one, and the few exceptions to this rule are positively charged. Further, we tested the influence of applying a quasi-harmonic correction to low-frequency vibrational modes. Generally, this decreases the predicted new particle formation rates and significantly alters the shape of the formation rate curve plotted against the sulfuric acid concentration. We found that the impact of the maximum size of the clusters explicitly included in the simulations depends on the simulated conditions: the errors due to the limited set of simulated clusters generally increase with temperature and decrease with vapor concentrations. The boundary conditions for clusters that are counted as formed particles (outgrowing clusters) have only a small influence on the results, provided that the definition is chemically reasonable and the set of simulated clusters is sufficiently large. We compared the predicted particle formation rates with experimental data measured at the CLOUD (Cosmics Leaving OUtdoor Droplets) chamber. A cluster distribution dynamics model shows improved agreement with experiments when using our new input data and the proposed combination of symmetry and quasi-harmonic corrections, compared to an earlier study based on older quantum chemical data.
  • Ihalainen, Olli (2019)
    The Earth’s Bond albedo is the ratio of the total reflected radiative flux emerging from the Earth’s top of the atmosphere (ToA) to the incident solar radiation. As such, it is a crucial component in modeling the Earth’s climate. This thesis presents a novel method for estimating the Earth’s Bond albedo, utilising the dynamical effects of Earth radiation pressure on satellite orbits, which are directly related to the Bond albedo. Whereas current methods for estimating the outgoing reflected radiation are based on point measurements of the radiance reflected by the Earth taken in the proximity of the planet, the new method presented in this thesis makes use of the fact that the Global Positioning System (GPS) satellites together view the entirety of the ToA surface. The theoretical groundwork for this new method is laid starting from the basic principles of light scattering, satellite dynamics, and Bayesian inference. The feasibility of the method is studied numerically using synthetic data generated from real measurements of GPS satellite orbital elements and imaging data from the Earth Polychromatic Imaging Camera (EPIC) aboard the Deep Space Climate Observatory (DSCOVR) spacecraft. The numerical methods section introduces the methods used for forward modeling the ToA outgoing radiation, the Runge-Kutta method for integrating the satellite orbits, and the virtual-observation Markov chain Monte Carlo methods used for solving the inverse problem. The section also describes a simple clustering method used for classifying the ToA from EPIC images. The inverse problem was studied with very simple models for the ToA, the satellites, and the satellite dynamics. These initial results were promising, as the inverse problem algorithm was able to accurately estimate the Bond albedo. Further study of the method is required to determine how the inverse problem algorithm performs when more realism is added to the models.
  • Muff, Jake (2023)
    Quantum Monte Carlo (QMC) is an accurate but computationally expensive technique for simulating the electronic structure of solids, and its use for modelling positron states and annihilation in solids is relatively new. These simulations can support positron annihilation spectroscopy and help with defect characterisation and vacancy identification in solids by calculating the positron lifetime with increased accuracy and comparing it to experimental results. One method of reducing the computational cost of simulations whilst maintaining chemical accuracy is to employ pseudopotentials. Pseudopotentials are a method to approximate the interactions between the outer valence electrons of an atom and the inner core electrons, which are difficult to model. By replacing the core electrons of an atom with an effective potential, a level of accuracy can be maintained whilst reducing the computational cost. This work extends existing research with a new set of pseudopotentials in which fewer core electrons are replaced by an effective potential, leading to an increase in the number of core electrons in the simulation. With the inclusion of additional core electrons in the simulation, the corrections that would otherwise need to be made to the positron lifetime may not be needed. Silicon is chosen as the element under study, as its high electron count makes it difficult to model accurately in positron simulations. The suitability of these new pseudopotentials for QMC is shown by calculating the cohesive and relaxation energies, with comparisons made to previously used pseudopotentials. The positron lifetime is calculated from QMC simulations and compared against experimental and theoretical values. The simulation method and the challenges due to the inclusion of more core electrons are presented and discussed. The results show that these pseudopotentials are suitable for use in QMC studies, including positron lifetime studies. With the inclusion of more core electrons in the simulation, a positron lifetime was calculated with similar accuracy to previous studies, without the need for corrections, demonstrating the validity of the pseudopotentials for use in positron studies. The validation of these pseudopotentials enables future theoretical studies to better capture the annihilation characteristics in cases where core electrons are important. In achieving these results, it was found that energy minimisation, rather than variance minimisation, was needed for optimising the wavefunction with these pseudopotentials.
  • Seppä, Riikka (2023)
    The purpose of this work is to investigate the scaling of ’t Hooft-Polyakov monopoles in the early universe. These monopoles are a general prediction of a grand unified theory phase transition in the early universe. Understanding the behavior of monopoles in the early universe is thus important. We tentatively find a scaling for the monopole separation which predicts that the fraction of the universe’s energy in monopoles remains constant in the radiation era, regardless of the initial monopole density. We perform lattice simulations on an expanding lattice with a cosmological background. We use the simplest fields which produce ’t Hooft-Polyakov monopoles, namely SU(2) gauge fields and a Higgs field in the adjoint representation. We initialize the fields such that we can control the initial monopole density. At the beginning of the simulations, a damping phase is performed to suppress nonphysical fluctuations in the fields, which are remnants of the initialization. The fields are then evolved according to the discretized field equations. Among other things, the number of monopoles is counted periodically during the simulation. To extend the dynamical range of the runs, the Press-Spergel-Ryden method is used to first grow the monopole size before the main evolution phase. There are different ways to estimate the average separation between monopoles in a monopole network, as well as to estimate the root mean square velocity of the monopoles. We use these estimators to find out how the average separation and velocity evolve during the runs. To find the scaling solution of the system, we fit the separation estimate to a function of conformal time. In this way we find that the average separation ξ depends on conformal time η as ξ ∝ η^(1/3), which indicates that the monopole density scales in conformal time in the same way as the critical energy density of the universe. We additionally find that the velocity measured with the velocity estimators depends on the separation approximately as v ∝ dξ/dη. It has been shown that a possible grand unified phase transition would produce an abundance of ’t Hooft-Polyakov monopoles and that some of these would survive to the present day and begin to dominate the energy density of the universe. Our result seemingly disagrees with this prediction, though there are several reasons why the prediction might not be compatible with the model we simulate. For one, in our model the monopoles do not move with thermal velocities, unlike what most of the predictions assume happens in the early universe; future simulations with thermal velocities added would thus be needed. Additionally, we ran simulations only in the radiation-dominated era of the universe. During the matter-dominated era, the monopoles might behave differently.
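    The scaling analysis described above comes down to fitting the separation estimate to a power law in conformal time. A minimal sketch of such a fit is shown below on synthetic data generated from the expected ξ ∝ η^(1/3) behaviour; the data points, prefactor and noise level are invented for illustration and are not the simulation measurements of the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)

# Synthetic "measurements" of the mean monopole separation xi at conformal times eta,
# generated here from the expected scaling xi ~ eta^(1/3) plus noise.
eta = np.linspace(50.0, 500.0, 40)
xi = 2.0 * eta ** (1.0 / 3.0) * (1.0 + rng.normal(0.0, 0.02, eta.size))

def power_law(eta, a, p):
    return a * eta ** p

(a_fit, p_fit), cov = curve_fit(power_law, eta, xi, p0=(1.0, 0.5))
a_err, p_err = np.sqrt(np.diag(cov))
print(f"xi ~ eta^p with p = {p_fit:.3f} +/- {p_err:.3f}  (expected 1/3 ~ 0.333)")
```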
  • Kupiainen, Tomi (2020)
    In this work we consider the method of unitarily inequivalent representations in the context of Majorana neutrinos and a simple seesaw model. In addition, the field-theoretical framework of neutrino physics, namely that of QFT and the SM, is reviewed. The oscillating neutrino states are expressed via suitable quantum operators acting on the physical vacuum of the theory, which provides further insight into the phenomenological flavor state ansatz made in the standard formulation of neutrino oscillations. We confirm that this method agrees with known results in the ultrarelativistic approximation while extending them to the non-relativistic region.
  • He, Ru (2023)
    Ga2O3 has been found to exhibit excellent radiation hardness properties, making it an ideal candidate for use in a variety of applications that involve exposure to ionizing radiation, such as space exploration, nuclear power generation, and medical imaging. Understanding the behaviour of Ga2O3 under irradiation is therefore crucial for optimizing its performance in these applications and ensuring their safe and efficient operation. There are five commonly identified polymorphs of Ga2O3, namely the β, α, γ, δ and ε structures; among these phases, β-Ga2O3 is the most stable crystal structure and has attracted the majority of recent attention. In this thesis, we used molecular dynamics simulations with newly developed machine-learned Gaussian approximation potentials to investigate radiation damage in β-Ga2O3. We inspected the gradual structural change in the β-Ga2O3 lattice with increasing doses of implanted Frenkel pairs. The results revealed that O Frenkel pairs have a strong tendency to recombine and return to their original sublattice sites. When Ga and O Frenkel pairs were implanted into the same cell, the crystal structure was damaged and converted to an amorphous phase at low doses. However, the accumulation of pure Ga Frenkel pairs in the simulation cells might induce a transition from the β to the γ phase, while the O sublattice retains its FCC crystal structure, which theoretically demonstrates the recent experimental finding that β-Ga2O3 transforms to the γ phase following ion implantation. To gain a better understanding of the behaviour of β-Ga2O3 under irradiation, we utilized collision cascade simulations. The results revealed that the O sublattice in the β-Ga2O3 lattice is robust and less susceptible to damage, despite O atoms having higher mobility. The collision and recrystallization process resulted in a greater accumulation of Ga defects than O defects, regardless of the primary knock-on atom (PKA) type. These results further revealed that displaced Ga ions recombine into the β lattice only with difficulty, while the FCC stacking of the O sublattice has a very strong tendency to recover. Our theoretical models of the radiation damage in β-Ga2O3 provide insight into the mechanisms underlying defect generation and recovery during experimental ion implantation, which has significant implications for improving the radiation tolerance of Ga2O3, as well as for optimizing its electronic and optical properties.
  • Hippeläinen, Antti (2022)
    This thesis reviews state-of-the-art top-down holographic methods used for modeling dense matter in neutron stars. This is done with the help of the Witten-Sakai-Sugimoto (WSS) model, which attempts to construct a holographic version of quantum chromodynamics (QCD) that mimics its features. As a starting chapter, string theory is reviewed in a quick fashion so that the reader can understand some of the (historical) developments behind this construction. Bosonic strings and superstrings are reviewed alongside conformal field theory, and focus is put on Dp-branes and compactifications of spacetime. This chapter also explains much of the jargon used in the thesis, which otherwise easily obstructs the main message. After a sufficient understanding of string theory has been achieved, we move on to holography and holographic dualities in the next chapter, focusing on AdS/CFT and actual computations using holography. Matching of theories is discussed to set up a holographic dictionary. After this, we need to choose either a top-down or a bottom-up approach, of which we use the former since we are going to use the WSS model. Then comes a brief review of QCD and its central features to be reproduced in holographic QCD. Immediately following this, we review the Witten-Sakai-Sugimoto model, which is qualitatively, and sometimes also quantitatively, a reasonable holographic version of QCD. We discuss the WSS model's successes and room for improvement, especially in places that might affect the analysis that we are about to perform on neutron stars. Finally, after all this theoretical development, we delve into the world of neutron stars. A quick review of the basic features and astrophysical constraints of neutron stars, along with difficulties in modeling them, is given. After this, we discuss two models of neutron stars, the first being a toy model with simplified physics and the other a more realistic one. The basic workflow required to get from a string-theoretic action to the equation-of-state data and other relevant observables is given step by step, and many recent results using this model are reviewed. In the end, the future of the development of the holographic duality, of constructing models with it, and of modeling neutron stars is discussed.
  • Duevski, Teodor (2019)
    In this thesis we model the term structure of zero-coupon bonds. Firstly, in the static setting, by norm-optimization Hilbert space techniques and starting from a set of benchmark fixed income instruments, we obtain a closed-form expression for a smooth discount curve. Moving on to the dynamic setting, we describe the stochastic modeling of the fixed income market. Finally, we introduce the Heath-Jarrow-Morton (HJM) methodology. We derive the evolution of zero-coupon bond prices implied by the HJM methodology and prove the HJM drift condition for no-arbitrage pricing in the fixed income market in a dynamic setting. Knowing the current discount curve is crucial for pricing and hedging fixed income securities, as it is a basic input to the HJM valuation methodology. Starting from the no-arbitrage prices of a set of benchmark fixed income instruments, we find a smooth discount curve which perfectly reproduces the current market quotes by minimizing a suitably defined norm related to the flatness of the forward curve. The regularity of the estimated discount curve makes it suitable for use as an input to the HJM methodology. This thesis includes a self-contained introduction to the mathematical modeling of the most commonly traded fixed income securities. In addition, we present the mathematical background necessary for modeling the fixed income market in a dynamic setting. Some familiarity with analysis, basic probability theory and functional analysis is assumed.
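    For reference, the drift condition mentioned above is usually stated as follows for the instantaneous forward rate f(t,T) under the risk-neutral measure; the notation here is the standard one and may differ from the thesis's conventions.

```latex
% HJM forward-rate dynamics and the no-arbitrage drift condition
% (standard form under the risk-neutral measure Q).
df(t,T) = \alpha(t,T)\,dt + \sigma(t,T)\,dW_t^{\mathbb{Q}},
\qquad
\alpha(t,T) = \sigma(t,T)\int_t^T \sigma(t,s)\,ds .
% Zero-coupon bond prices are then recovered from the forward curve:
% P(t,T) = \exp\!\Big(-\int_t^T f(t,s)\,ds\Big).
```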
  • Piispa, Aleksi (2022)
    The nature of dense matter is one of the greatest mysteries in high energy physics. For example, we do not know how QCD matter behaves at neutron star densities, as there the matter is strongly coupled. Thus auxiliary methods have to be applied. One of these methods is the AdS/CFT correspondence. This maps a strongly coupled field theory to a weakly coupled gravity theory. The best-known example of this correspondence is the duality between N = 4 Super Yang-Mills and type IIB supergravity in AdS5 × S5. This duality at finite temperature and chemical potential is the one we invoke in our study. It has been hypothesized that the dense matter would be in a color superconducting phase, where pairs of quarks form a condensate. This has a natural interpretation in the gravity theory. The AdS5 × S5 geometry is sourced by a stack of N coincident D3-branes. This N corresponds to the gauge group SU(N) of N = 4 SYM. To study spontaneous breaking of this gauge group, one studies systems where D3-branes have separated from the stack. In this work we present two methods of studying the possibility of separating these branes from the stack. First we present an effective potential for a probe brane, which covers the dynamics of a single D3-brane in the bulk. We do this by using the action principle. Then we construct an effective potential for a shell built from multiple branes. We do this by using the Israel junction conditions. A single brane in the bulk corresponds to SU(N) → SU(N − 1) × U(1) symmetry breaking, and a shell of k branes corresponds to SU(N) → SU(N − k) × U(1)^k symmetry breaking. Similar spontaneous breaking of the gauge group happens in QCD when it transitions to a CSC phase, and hence these phases are called color superconducting. We find that for sufficiently high chemical potential the system is susceptible to single-brane nucleation. The phase with higher breaking of the gauge group, which corresponds to having a shell made of branes in the bulk, is metastable. This implies that we were able to construct CSC phases of N = 4 SYM; however, the exact details of the phase diagram structure are left for future research.
  • Hällfors, Jaakko (2023)
    Topological defects are among the more common phenomena of many extensions of the standard model of particle physics. In some sense, defects are a consequence of an unresolvable misalignment between different regions of the system, much like cracks in ice or kinks in an antiquated telephone cord. In our context, they present themselves as localised inhomogeneities of the fundamental fields, emerging at the boundaries of the misaligned regions at the cost of potentially massive trapped energy. Should the cosmological variety exist in nature, they are hypothesised to emerge from some currently unknown cosmological phase transition, leaving their characteristic mark on the evolution of the nascent universe. To date, so-called cosmic strings are perhaps the most promising type of cosmic defect, at least with respect to their observational prospects. Cosmic strings, as the name suggests, are line-like topological defects; exceedingly thin, yet highly energetic. Given the advent of gravitational wave astronomy, a substantial amount of research is devoted to detailed and expensive real-time computer simulations of various cosmic string models in hopes of extracting their effects on the gravitational wave background. In this thesis we discuss the Abelian-Higgs model, a toy model of a gauge theory of a complex scalar field and a real vector field. Through a choice of a symmetry-breaking scalar potential, this model permits line defects, so-called local strings. We discuss some generalities of classical field theory as well as the interesting mathematical theory of topological defects. We apply these to our model and present the numerical methods needed to write our own cosmic string simulation. We use the newly written simulation to reproduce a number of contemporary results on the scaling properties of string networks and present some preliminary results from a less investigated region of the model parameter space, attempting to compare the effects of different types of string-string interactions. Furthermore, preliminary results are presented on the thermodynamic evolution of the system, and the effects of a common computational trick, a comoving string width, are discussed with respect to the evolution of the equation of state.
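    For context, the Abelian-Higgs model referred to above is defined, in one standard convention (normalisations of the potential and couplings vary between references), by the Lagrangian density:

```latex
% Abelian-Higgs model: complex scalar phi coupled to a U(1) gauge field A_mu,
% with a symmetry-breaking quartic potential (one standard convention).
\mathcal{L} = -\frac{1}{4}F_{\mu\nu}F^{\mu\nu}
            + (D_\mu\varphi)^*(D^\mu\varphi)
            - \frac{\lambda}{4}\left(|\varphi|^2 - v^2\right)^2,
\qquad
D_\mu\varphi = (\partial_\mu - i e A_\mu)\varphi,
\quad
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu .
```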
  • Mukkula, Olli (2024)
    Quantum computers utilize qubits to store and process quantum information. In superconducting quantum computers, qubits are implemented as quantum superconducting resonant circuits. The circuits are operated using only two energy states, which form the computational basis for the qubit. To suppress leakage to non-computational states, superconducting qubits are designed to be anharmonic oscillators, which is achieved using one or more Josephson junctions, a nonlinear superconducting element. One of the main challenges in developing quantum computers is minimizing the decoherence caused by environmental noise. Decoherence is characterized by two coherence times, T1 for depolarization processes and T2 for dephasing. This thesis reviews and investigates the decoherence properties of superconducting qubits. The main goal of the thesis is to analyze the tradeoff between anharmonicity and dephasing in the unimon qubit. The recently developed unimon incorporates a single Josephson junction shunted by a linear inductor and a capacitor. The unimon is tunable by an external magnetic flux, and at the half flux quantum bias the Josephson energy is partially canceled by the inductive energy, allowing the unimon to have relatively high anharmonicity while remaining fully protected against low-frequency charge noise. In addition, at the sweet spot with respect to the magnetic flux, the unimon becomes immune to first-order perturbations in the flux. The sweet spot, however, is relatively narrow, making the unimon susceptible to dephasing through quadratic coupling to the flux noise. In the first chapter of this thesis, we present a comprehensive look at the basic theory of superconducting qubits, starting with two-state quantum systems, followed by superconductivity and superconducting circuit elements, and finally combining these two by introducing circuit quantum electrodynamics (cQED), a framework for building superconducting qubits. We follow with a theoretical discussion of decoherence in two-state quantum systems, described by the Bloch-Redfield formalism. We continue the discussion by estimating decoherence using perturbation theory, with special care put into the dephasing due to low-frequency 1/f noise. Finally, we review the theoretical model of the unimon, which is used in the numerical analysis. As the main result of this thesis, we suggest a design parameter regime for the unimon which gives the best ratio between anharmonicity and T2.
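    For orientation, the single-mode Hamiltonian of the circuit class the unimon belongs to (a Josephson junction shunted by a linear inductor and a capacitor) is commonly written as below; the exact form and conventions used in the thesis may differ.

```latex
% Single-mode Hamiltonian of a junction shunted by a linear inductor and a
% capacitor (fluxonium/unimon circuit class; conventions vary between references).
H = 4 E_C\, \hat{n}^2
  + \frac{1}{2} E_L\, \hat{\varphi}^2
  - E_J \cos\!\left(\hat{\varphi} - 2\pi\,\Phi_{\mathrm{ext}}/\Phi_0\right),
% where E_C is the charging energy, E_L the inductive energy, E_J the Josephson
% energy, and Phi_ext the external flux threading the loop. At the half flux
% quantum bias Phi_ext = Phi_0 / 2, the cosine term counteracts the quadratic
% inductive term, flattening the potential and enhancing the anharmonicity,
% which is the tradeoff against flux-noise dephasing analyzed in the thesis.
```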