
Browsing by master's degree program "Teoreettisten ja laskennallisten menetelmien maisteriohjelma (Master's Programme in Theoretical and Computational Methods)"


  • He, Ru (2023)
    Ga2O3 has been found to exhibit excellent radiation hardness, making it an ideal candidate for applications that involve exposure to ionizing radiation, such as space exploration, nuclear power generation, and medical imaging. Understanding the behaviour of Ga2O3 under irradiation is therefore crucial for optimizing its performance in these applications and ensuring their safe and efficient operation. There are five commonly identified polymorphs of Ga2O3, namely the α, β, γ, δ and ε structures; among these phases, β-Ga2O3 is the most stable crystal structure and has attracted the majority of recent attention. In this thesis, we used molecular dynamics simulations with newly developed machine-learned Gaussian approximation potentials to investigate radiation damage in β-Ga2O3. We inspected the gradual structural change of the β-Ga2O3 lattice with increasing doses of implanted Frenkel pairs. The results revealed that O-Frenkel pairs have a strong tendency to recombine and return to their original sublattice sites. When Ga- and O-Frenkel pairs were implanted into the same cell, the crystal structure was damaged and converted to an amorphous phase already at low doses. However, the accumulation of pure Ga-Frenkel pairs in the simulation cells can induce a transition from β- to γ-Ga2O3 while the O sublattice retains its FCC crystal structure, which theoretically corroborates the recent experimental finding that β-Ga2O3 transforms to the γ phase following ion implantation. To gain a better understanding of the natural behaviour of β-Ga2O3 under irradiation, we utilized collision cascade simulations. The results revealed that the O sublattice in the β-Ga2O3 lattice is robust and less susceptible to damage, despite O atoms having higher mobility. The collision and recrystallization process resulted in a greater accumulation of Ga defects than O defects, regardless of the PKA (primary knock-on atom) type. These results further revealed that displaced Ga ions recombine into the β-Ga lattice only with difficulty, while the FCC stacking of the O sublattice has a very strong tendency to recover. Our theoretical models of radiation damage in β-Ga2O3 provide insight into the mechanisms underlying defect generation and recovery during experimental ion implantation, with significant implications for improving the radiation tolerance of Ga2O3 as well as optimizing its electronic and optical properties.
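Defect counts of the kind discussed above are commonly extracted with Wigner-Seitz cell analysis: each atom of the damaged configuration is assigned to its nearest perfect-lattice site, so that empty sites count as vacancies and multiply occupied sites as interstitials. A minimal illustrative sketch, not the thesis code (the toy lattice and function names are ours):

```python
# Sketch of Wigner-Seitz defect analysis: assign each displaced atom to its
# nearest perfect-lattice site; empty sites are vacancies, extra occupants
# of a site are interstitials. Illustrative only.
import numpy as np

def wigner_seitz_defects(reference, displaced):
    """Count (vacancies, interstitials) by nearest-site occupancy."""
    occupancy = np.zeros(len(reference), dtype=int)
    for atom in displaced:
        site = np.argmin(np.linalg.norm(reference - atom, axis=1))
        occupancy[site] += 1
    vacancies = int(np.sum(occupancy == 0))
    interstitials = int(np.sum(np.maximum(occupancy - 1, 0)))
    return vacancies, interstitials

# Toy example: a 4-site 1D "lattice" containing one Frenkel pair.
ref = np.array([[0.0], [1.0], [2.0], [3.0]])
# The atom from site 1 has been displaced next to site 3:
disp = np.array([[0.0], [2.9], [2.0], [3.0]])
print(wigner_seitz_defects(ref, disp))  # -> (1, 1)
```

In practice the analysis is done per sublattice (Ga and O separately) with periodic boundary conditions, which this sketch omits.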
  • Hippeläinen, Antti (2022)
    This thesis reviews state-of-the-art top-down holographic methods used for modeling dense matter in neutron stars. This is done with the help of the Witten-Sakai-Sugimoto (WSS) model, which attempts to construct a holographic version of quantum chromodynamics (QCD) that mimics its features. As a starting chapter, string theory is reviewed briefly so that the reader can understand some of the (historical) developments behind this construction. Bosonic strings and superstrings are reviewed alongside conformal field theory, with focus on Dp-branes and compactifications of spacetime. This chapter also explains much of the jargon used in the thesis, which otherwise easily obstructs the main message. After a sufficient understanding of string theory has been achieved, we move on to holography and holographic dualities in the next chapter, focusing on AdS/CFT and actual computations using holography. Matching of theories is discussed to set up a holographic dictionary. After this, we need to choose either a top-down or a bottom-up approach, of which we use the former since we are going to use the WSS model. Then comes a brief review of QCD and the central features to be reproduced in holographic QCD. Immediately following this, we review the Witten-Sakai-Sugimoto model, which is qualitatively, and sometimes also quantitatively, a reasonable holographic version of QCD. We discuss the WSS model's successes and room for improvement, especially in places that might affect the analysis we are about to perform on neutron stars. Finally, after all this theoretical development, we delve into the world of neutron stars. A quick review of the basic features and astrophysical constraints of neutron stars, along with difficulties in modeling them, is given. After this, we discuss two models of neutron stars, the first a toy model with simplified physics and the other more realistic. The basic workflow required to get from a string-theoretic action to equation-of-state data and other relevant observables is given step by step, and many recent results using this model are reviewed. In the end, the future of developing the holographic duality, constructing models with it, and modeling neutron stars is discussed.
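For context, the step from equation-of-state data to neutron-star observables is usually made through the Tolman-Oppenheimer-Volkoff (TOV) equations (standard form, units $G = c = 1$; the holographic model enters only through the relation $p(\epsilon)$):

```latex
\frac{dm}{dr} = 4\pi r^2\,\epsilon(r), \qquad
\frac{dp}{dr} = -\frac{\bigl(\epsilon + p\bigr)\bigl(m + 4\pi r^3 p\bigr)}{r\,\bigl(r - 2m\bigr)},
```

integrated from the center outwards until $p = 0$, which defines the stellar radius $R$ and mass $M = m(R)$.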
  • Piispa, Aleksi (2022)
    The nature of dense matter is one of the greatest mysteries in high-energy physics. For example, we do not know how QCD matter behaves at neutron-star densities, as there the matter is strongly coupled. Thus auxiliary methods have to be applied. One of these methods is the AdS/CFT correspondence, which maps a strongly coupled field theory to a weakly coupled gravity theory. The best-known example of this correspondence is the duality between N = 4 Super Yang-Mills and type IIB supergravity in AdS5 × S5. This duality at finite temperature and chemical potential is the one we invoke in our study. It has been hypothesized that dense matter would be in a color superconducting phase, where pairs of quarks form a condensate. This has a natural interpretation in the gravity theory. The AdS5 × S5 geometry is sourced by a stack of N coincident D3-branes; this N corresponds to the gauge group SU(N) of N = 4 SYM. To study spontaneous breaking of this gauge group, one studies systems where D3-branes have separated from the stack. In this work we present two methods of studying the possibility of separating these branes from the stack. First we present an effective potential for a probe brane, which covers the dynamics of a single D3-brane in the bulk; we derive it using the action principle. Then we construct an effective potential for a shell built from multiple branes, using the Israel junction conditions. A single brane in the bulk corresponds to SU(N) → SU(N−1) × U(1) symmetry breaking, and a shell of k branes corresponds to SU(N) → SU(N−k) × U(1)^k symmetry breaking. Similar spontaneous breaking of the gauge group happens in QCD when we transition to a CSC phase, and hence these phases are called color superconducting. We find that for sufficiently high chemical potential the system is susceptible to single-brane nucleation. The phase with higher breaking of the gauge group, which corresponds to having a shell of branes in the bulk, is metastable. This implies that we were able to construct CSC phases of N = 4 SYM; however, the exact details of the phase diagram structure are left for future research.
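For reference, the Israel junction conditions used for the multi-brane shell relate the jump in extrinsic curvature $K_{ab}$ across the shell to its surface stress tensor (standard form; sign conventions vary between references):

```latex
S_{ab} = -\frac{1}{8\pi G}\Bigl([K_{ab}] - [K]\,h_{ab}\Bigr), \qquad [X] \equiv X^{+} - X^{-},
```

where $h_{ab}$ is the induced metric on the shell and $\pm$ label its two sides.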
  • Hällfors, Jaakko (2023)
    Topological defects are some of the more common phenomena of many extensions of the standard model of particle physics. In some sense, defects are a consequence of an unresolvable misalignment between different regions of the system, much like cracks in ice or kinks in an antiquated telephone cord. In our context, they present themselves as localised inhomogeneities of the fundamental fields, emerging at the boundaries of the misaligned regions at the cost of potentially massive trapped energy. Should the cosmological variety exist in nature, they are hypothesised to emerge from some currently unknown cosmological phase transition, leaving their characteristic mark on the evolution of the nascent universe. To date, so-called cosmic strings are perhaps the most promising type of cosmic defect, at least with respect to their observational prospects. Cosmic strings, as the name suggests, are linelike topological defects: exceedingly thin, yet highly energetic. Given the advent of gravitational wave astronomy, a substantial amount of research is devoted to detailed and expensive real-time computer simulations of various cosmic string models in hopes of extracting their effects on the gravitational wave background. In this thesis we discuss the Abelian-Higgs model, a toy model of a gauge theory of a complex scalar field and a real vector field. Through a choice of a symmetry-breaking scalar potential, this model permits line defects, so-called local strings. We discuss some generalities of classical field theory as well as the interesting mathematical theory of topological defects. We apply these to our model and present the numerical methods needed to write our own cosmic string simulation. We use the newly written simulation to reproduce a number of contemporary results on the scaling properties of string networks and present some preliminary results from a less investigated region of the model parameter space, attempting to compare the effects of different types of string-string interactions. Furthermore, preliminary results are presented on the thermodynamic evolution of the system, and the effects of a common computational trick, comoving string width, are discussed with respect to the evolution of the equation of state.
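For reference, the Abelian-Higgs model referred to above is conventionally defined by the Lagrangian (standard form; the normalisation of the quartic coupling varies between references):

```latex
\mathcal{L} = -\frac{1}{4}F_{\mu\nu}F^{\mu\nu} + \left|D_\mu\phi\right|^2 - \frac{\lambda}{4}\left(|\phi|^2 - v^2\right)^2, \qquad D_\mu\phi = \bigl(\partial_\mu - i e A_\mu\bigr)\phi,
```

whose symmetry-breaking potential admits string solutions in which the phase of $\phi$ winds an integer number of times around the defect core.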
  • Mukkula, Olli (2024)
    Quantum computers utilize qubits to store and process quantum information. In superconducting quantum computers, qubits are implemented as quantum superconducting resonant circuits. The circuits are operated at only two energy states, which form the computational basis for the qubit. To suppress leakage to non-computational states, superconducting qubits are designed to be anharmonic oscillators, which is achieved using one or more Josephson junctions, a nonlinear superconducting element. One of the main challenges in developing quantum computers is minimizing the decoherence caused by environmental noise. Decoherence is characterized by two coherence times: T1 for depolarization processes and T2 for dephasing. This thesis reviews and investigates the decoherence properties of superconducting qubits. The main goal of the thesis is to analyze the tradeoff between anharmonicity and dephasing in the unimon qubit. The recently developed unimon incorporates a single Josephson junction shunted by a linear inductor and a capacitor. The unimon is tunable by external magnetic flux, and at the half-flux-quantum bias the Josephson energy is partially canceled by the inductive energy, allowing the unimon to have relatively high anharmonicity while remaining fully protected against low-frequency charge noise. In addition, at the sweet spot with respect to the magnetic flux, the unimon becomes immune to first-order perturbations in the flux. The sweet spot, however, is relatively narrow, making the unimon susceptible to dephasing through quadratic coupling to the flux noise. In the first chapter of this thesis, we present a comprehensive look into the basic theory of superconducting qubits, starting with two-state quantum systems, followed by superconductivity and superconducting circuit elements, and finally combining the two by introducing circuit quantum electrodynamics (cQED), a framework for building superconducting qubits. We follow with a theoretical discussion of decoherence in two-state quantum systems, described by the Bloch-Redfield formalism. We continue by estimating decoherence using perturbation theory, with special care put into dephasing due to low-frequency 1/f noise. Finally, we review the theoretical model of the unimon, which is used in the numerical analysis. As the main result of this thesis, we suggest a design parameter regime for the unimon which gives the best ratio between anharmonicity and T2.
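The anharmonicity at the heart of this tradeoff can be made concrete with a toy numerical experiment: diagonalise a one-dimensional circuit Hamiltonian on a grid and compare the first two transition energies. The potential below mimics the half-flux situation described above, where the Josephson term partially cancels the inductive term; all parameter values and units are illustrative, not those of the thesis.

```python
# Numerical sketch: diagonalise H = -4 E_C d^2/dphi^2 + U(phi) on a grid
# and compare the first two transition energies. At the half-flux sweet
# spot the Josephson energy counteracts the inductive energy, mimicked
# here by the +E_J cos(phi) term. Parameters are illustrative only.
import numpy as np

def transition_energies(E_C, E_L, E_J, n=801, span=12.0):
    phi = np.linspace(-span, span, n)
    d = phi[1] - phi[0]
    U = 0.5 * E_L * phi**2 + E_J * np.cos(phi)   # shallow anharmonic well
    # kinetic term -4 E_C d^2/dphi^2 via central finite differences
    main = 8.0 * E_C / d**2 + U
    off = -4.0 * E_C / d**2 * np.ones(n - 1)
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    E = np.linalg.eigvalsh(H)[:3]
    return E[1] - E[0], E[2] - E[1]

f01, f12 = transition_energies(E_C=1.0, E_L=0.5, E_J=0.4)
anharmonicity = f12 - f01
print(anharmonicity)  # nonzero: the cosine makes the spectrum anharmonic
```

The closer E_J comes to cancelling E_L, the flatter the well and the larger the relative anharmonicity, which is one axis of the tradeoff the thesis explores.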
  • Hatakka, Ilari (2023)
    In quantum field theory the objects of interest are the n-point vacuum expectation values, which can be calculated from the path integral. The path integral usually used in physics is not well-defined, and the main motivation for this thesis is to give axioms that a well-defined path integral candidate has to satisfy, at minimum, for it to be physically relevant: we want the path integral to have properties which allow us to reconstruct the physically interesting objects from it. The axioms given in this thesis are called the Osterwalder-Schrader axioms, and the reconstruction of the physical objects from a path integral satisfying the axioms is called the Osterwalder-Schrader reconstruction. The Osterwalder-Schrader axioms are special in the sense that they are stated in terms of Euclidean spacetime instead of the physically relevant Minkowski spacetime. As the physical objects live in Minkowski spacetime, this means that when reconstructing the physically relevant objects we have to go back to Minkowski spacetime at some point. This thesis has three parts (and an introduction). In the first part we give a brief introduction to the parts of functional analysis we will need later: the theory of distributions and of generators of families of operators. The second part covers the Osterwalder-Schrader axioms and the reconstruction of the physically relevant objects from the path integral. In the last part we check that the path integral for the free field of mass m satisfies the Osterwalder-Schrader axioms.
  • Tyree, Juniper (2023)
    Response surface models (RSMs) are cheap, reduced-complexity, and usually statistical models that are fitted to the response of more complex models to approximate their outputs with higher computational efficiency. In atmospheric science, there has been a continuous push to reduce the amount of training data required to fit an RSM. With this reduction in costly data gathering, RSMs can be used more ad hoc and quickly adapted to new applications. However, with the decrease in diverse training data, the risk increases that the RSM is eventually used on inputs on which it cannot make a prediction. If there is no indication from the model that its outputs can no longer be trusted, trust in the entire RSM decreases. We present a framework for building prudent RSMs that always output predictions with confidence and uncertainty estimates. We show how confidence and uncertainty can be propagated through downstream analysis such that even predictions on inputs outside the training domain, or in areas of high variance, can be integrated. Specifically, we introduce the Icarus RSM architecture, which combines an out-of-distribution detector, a prediction model, and an uncertainty quantifier. Icarus-produced predictions and their uncertainties are conditioned on the confidence that the inputs come from the same distribution the RSM was trained on. We put particular focus on exploring out-of-distribution detection, for which we conduct a broad literature review, design an intuitive evaluation procedure with three easily visualisable toy examples, and suggest two methodological improvements. We also explore and evaluate popular prediction models and uncertainty quantifiers. We use the one-dimensional atmospheric chemistry transport model SOSAA as an example of a complex model for this thesis. We produce a dataset of model inputs and outputs from simulations of the atmospheric conditions along air parcel trajectories that arrived at the SMEAR II measurement station in Hyytiälä, Finland, in May 2018. We evaluate several prediction models and uncertainty quantification methods on this dataset and construct a proof-of-concept SOSAA RSM using the Icarus RSM architecture. The SOSAA RSM is built on pairwise-difference regression using random forests and an auto-associative out-of-distribution detector with a confidence scorer, which is trained with both the original training inputs and new synthetic out-of-distribution samples. We also design a graphical user interface to configure the SOSAA model and trial the SOSAA RSM. We provide recommendations for out-of-distribution detection, prediction models, and uncertainty quantification based on our exploration of these three systems. We also stress-test the proof-of-concept SOSAA RSM implementation to reveal its limitations in predicting model perturbation outputs and show directions for valuable future research. Finally, our experiments affirm the importance of reporting predictions alongside well-calibrated confidence scores and uncertainty levels so that the predictions can be used with confidence and certainty in scientific research applications.
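The three-component idea (out-of-distribution detector, predictor, uncertainty quantifier) can be sketched in a few lines. The components below (nearest-neighbour confidence, k-NN mean, neighbour spread) are deliberately simple stand-ins, not the methods used in the thesis; the point is only that every prediction is returned together with an uncertainty and a confidence score.

```python
# Minimal "prudent prediction" sketch in the spirit of an RSM that always
# reports prediction, uncertainty, and confidence. Illustrative only.
import numpy as np

def prudent_predict(X_train, y_train, x, k=5, scale=1.0):
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    prediction = float(np.mean(y_train[nearest]))     # predictor
    uncertainty = float(np.std(y_train[nearest]))     # uncertainty quantifier
    # confidence decays with distance to the training set (OOD detection)
    confidence = float(np.exp(-dists.min() / scale))
    return prediction, uncertainty, confidence

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=200)

p_in, u_in, c_in = prudent_predict(X, y, np.array([0.1, 0.2]))
p_out, u_out, c_out = prudent_predict(X, y, np.array([5.0, 5.0]))
print(c_in > c_out)  # in-distribution input gets higher confidence -> True
```

A downstream analysis can then weight or reject predictions by `confidence` instead of silently trusting every output.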
  • Nummi, Vilhelmiina (2024)
    Quantum chromodynamics (QCD) is the quantum field theory of the strong interaction and therefore describes one of the fundamental forces in the universe. Quarks and gluons, together called partons, interact via the strong force, and their interactions can be observed in high-energy collisions involving hadrons. Hadrons always contain some composition of quarks. The simplest way to obtain a partonic final state in a collision is through electron-positron annihilation, which produces a photon that subsequently converts into a combination of partons. Partons, unlike leptons, have properties known as color and flavor. There are six different types of quarks, all of which can be produced in a collision at sufficiently high energies. The energy involved in a hard collision of an electron and a positron is carried through the entire process. Generally, the partons' binding to each other, known as color confinement, is so strong that they are observed as hadrons. Hadrons are color-neutral, meaning that the colored quarks are arranged such that they result in a color-neutral particle. This thesis focuses on calculating partonic collision outcomes at short distances using perturbative QCD. At high energies and short distances, quarks are weakly coupled, allowing them to be considered relatively free from parton-parton interactions. By comparing the outcomes of electron-positron collisions, namely the electromagnetic and strong-force (leptonic and partonic) final states, we can obtain information on the differences in their interactions. By constructing the process of leptonic annihilation, we can compute the probabilities of charged-particle final states produced via a photon. Furthermore, we calculate the cross-sections of the processes resulting in several partonic configurations. One of the results of this thesis is the ratio between the hadronic and leptonic outcomes stemming from the same initial collision. With the partonic cross-section calculated up to next-to-leading order, the ratio exhibits the impact of color factors as well as the running coupling of the parton-parton interaction.
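The ratio referred to above is the classic $R$ ratio, which at the orders considered here reads (standard textbook result, quoted for context):

```latex
R \;=\; \frac{\sigma\left(e^+e^- \to \text{hadrons}\right)}{\sigma\left(e^+e^- \to \mu^+\mu^-\right)}
\;=\; N_c \sum_q e_q^2 \left(1 + \frac{\alpha_s}{\pi} + \mathcal{O}\!\left(\alpha_s^2\right)\right),
```

where the sum runs over the quark flavors light enough to be produced, $e_q$ are their electric charges, and the factor $N_c = 3$ reflects color.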
  • Enckell, Anastasia (2023)
    Numerical techniques have become powerful tools for studying quantum systems. Eventually, quantum computers may enable novel ways to perform numerical simulations and conquer problems that arise in classical simulations of highly entangled matter. Simple one-dimensional systems of low entanglement can be simulated efficiently on a classical computer using tensor networks. Such toy simulations also give us the opportunity to study the methods of quantum simulation, such as different transformation techniques and optimization algorithms, that could be beneficial for near-term quantum technologies. In this thesis, we study a theoretical framework for fermionic quantum simulation and simulate the real-time evolution of particles governed by the Gross-Neveu model in one dimension. To simulate the Gross-Neveu model classically, we use the matrix product state (MPS) method. Starting from the continuum case, we discretise the model by putting it on a lattice and encode the time evolution operator with the help of two fermion-to-qubit transformations, Jordan-Wigner and Bravyi-Kitaev. The simulation results are visualised as plots of the probability density. The results indicate the expected flavour and spatial symmetry of the system. The comparison of the two transformations shows better performance of the Jordan-Wigner transformation both before and after gate reduction.
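The Jordan-Wigner transformation mentioned above maps fermionic modes to qubit (Pauli) operators by attaching a string of $Z$ operators that keeps track of fermionic signs. A small self-contained sketch (illustrative only, not the thesis implementation) builds the operators explicitly and checks the canonical anticommutation relations:

```python
# Jordan-Wigner sketch: mode j maps to Z (x) ... (x) Z (x) (X+iY)/2 (x) I ...
# The checks verify {a_i, a_j^dag} = delta_ij and {a_i, a_j} = 0.
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
lower = np.array([[0.0, 1.0], [0.0, 0.0]])  # (X + iY)/2

def jw_annihilation(j, n):
    """Jordan-Wigner Pauli string for fermionic mode j on n qubits."""
    op = np.eye(1)
    for k in range(n):
        factor = Z if k < j else lower if k == j else I2
        op = np.kron(op, factor)
    return op

n = 3
a = [jw_annihilation(j, n) for j in range(n)]
ok_daggered = all(
    np.allclose(a[i] @ a[j].conj().T + a[j].conj().T @ a[i],
                np.eye(2 ** n) if i == j else 0)
    for i in range(n) for j in range(n)
)
ok_plain = all(
    np.allclose(a[i] @ a[j] + a[j] @ a[i], 0)
    for i in range(n) for j in range(n)
)
print(ok_daggered and ok_plain)  # -> True
```

The Bravyi-Kitaev transformation achieves the same algebra with shorter operator strings on average, which is one reason the two encodings are compared in the thesis.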
  • Hernandez Serrano, Ainhoa (2023)
    Using quantum algorithms to carry out machine learning (ML) tasks is what is known as quantum machine learning (QML), and the methods developed within this field have the potential to outperform their classical counterparts in solving certain learning problems. The development of the field partly depends on that of a functional quantum random access memory (QRAM), called for by some of the algorithms devised. Such a device would store data in superposition and could then be queried when algorithms require it, similarly to its classical counterpart, allowing for efficient data access. Taking an axiomatic approach to QRAM, this thesis provides the main considerations, assumptions and results regarding QRAM, and yields a QRAM handbook and comprehensive introduction to the literature pertaining to it.
  • Suominen, Heikki (2022)
    Quantum computers are one of the most prominent emerging technologies of the 21st century. While several practical implementations of the qubit, the elementary unit of information in quantum computers, exist, the family of superconducting qubits remains one of the most promising platforms for scaled-up quantum computers. Lately, as the limiting factor of non-error-corrected quantum computers has begun to shift from the number of qubits to gate fidelity, efficient control and readout parameter optimization has become a field of significant scientific interest. Since these procedures are multibranched and difficult to automate, a great deal of effort has gone into developing associated software, and even technologies such as machine learning are making an appearance in modern programs. In this thesis, we offer an extensive theoretical background on superconducting transmon qubits, starting from the classical models of electronic circuits and moving towards circuit quantum electrodynamics. We consider how the qubit is controlled, how its state is read out, and how the information contained in it can become corrupted by noise. We review theoretical models for characteristic parameters such as decoherence times, and see how control pulse parameters such as amplitude and rise time affect gate fidelity. We also discuss the procedure for experimentally obtaining characteristic qubit parameters, and the optimized randomized benchmarking for immediate tune-up (ORBIT) protocol for control pulse optimization, both in theory and alongside novel experimental results. The experiments are carried out with refactored characterization software and novel ORBIT software, using the premises and resources of the Quantum Computing and Devices (QCD) group at Aalto University.
The refactoring project, together with the software used for the ORBIT protocol, aims to provide the QCD group with efficient and streamlined methods for finding characteristic qubit parameters and high-fidelity control pulses. In the last parts of the thesis, we evaluate the success and shortcomings of the introduced projects, and discuss future perspectives for the software.
  • Toikka, Nico (2023)
    Particle jets are formed in high-energy proton-proton collisions and then measured by particle physics experiments. These jets, initiated by the splitting and hadronization of color-charged quarks and gluons, serve as important signatures of the strong force and provide a view of size scales smaller than that of an atom. Understanding jets, their behaviour and structure, is thus a path to understanding one of the four fundamental forces in the known universe. But it is not only the strong force that is of interest. Studies of Standard Model physics and beyond-Standard-Model physics require a precise measurement of the energies of final-state particles, often represented as jets, to understand our existing theories, to search for new physics hidden among our current experiments, and to probe for new physics directly. As experimentally reconstructed objects, the measured jets require calibration. At the CMS experiment, the jets are calibrated to the particle-level jet energy scale and their resolution is determined to achieve the experimental goals of precision and understanding. During the many-step calibration process, the position, energy and structure of the jets are taken into account to provide the most accurate calibration possible. It also matters greatly whether a jet is initiated by a gluon or a quark, as this affects the jet's structure, the distribution of energy among its constituents, and the number of constituents. These differences cause disparities when calibrating the jets. Understanding jets at the theory level is also important for simulation, which is utilized heavily during calibration and represents our current theoretical understanding of particle physics. This thesis presents a measurement of the relative response between light-quark (up, down and strange) and gluon jets from the data of the CMS experiment measured during 2018. The relative response is a measure of calibration between the objects and helps to show where the difference between quark and gluon jets is largest. The discrimination between light quarks and gluons is performed with machine learning tools, and the relative response is compared at multiple stages of reconstruction to see how different effects modify the response. The dijet sample used in this study provides a full view of the phase space in pT and |eta|, with the analysis covering both quark- and gluon-dominated regions. These studies can be continued with similar investigations of other samples, with the possibility of using the combined results as part of the calibration chain.
  • Paloranta, Matias Mikko Aleksi (2023)
    Low-frequency $1/f$ noise is ubiquitous, found in all electronic devices and in areas as diverse as music, economics and biological systems. Despite valiant efforts, the source of $1/f$ noise remains one of the oldest unsolved mysteries in modern physics, nearly 100 years after its initial discovery in 1925. In metallic conductors, resistance $1/f$ noise is commonly attributed to the diffusion of mobile defects that alter the scattering cross-section experienced by the charge carriers, and models based on two-level tunneling systems (TLTS) are typically employed. However, a model based on the dynamics of mobile defects forming temporary clusters would naturally offer the long-term correlations required by $1/f$ noise via the nearly limitless number of configurations among a group of defects. Resistance $1/f$ noise due to such motion of mobile defects was studied via Monte Carlo simulations of a simple resistor network resembling an atomic lattice. The defects migrate through the lattice via thermally activated hopping, causing fluctuations in the resistance due to the varying scattering cross-section. The power spectral density (PSD) $S(f)$ of the simulated resistance noise was then calculated and first compared to $S(f)=C/f^\alpha$ noise, where $C$ is a constant and $\alpha$ is ideally close to unity. The value of $\alpha$ was estimated via a linear fit of the noise PSD on a log-log scale. The resistor network was simulated with varying values of temperature, system size and defect concentration. The simulations did not yield pure $1/f^\alpha$ noise; instead, the lowest frequencies displayed a white-noise tail, changing to $1/f^\alpha$ noise between $10^{-4}$ and $10^{-2}$~Hz. In this way the spectrum of the simulated noise resembles a Lorentzian. The value of $\alpha$ was found to be most sensitive to the temperature $T$, which directly affects the motion of the defects. At high $T$ the value of $\alpha$ was closer to 1, whereas at low $T$ it was closer to $1.5$. Varying the size of the system was found to have little impact on $\alpha$ when the temperature and defect concentration were kept fixed. Increasing the number of defects had a slightly larger effect on $\alpha$ when the temperature and system size were kept fixed: $\alpha$ was closer to unity when the defect concentration was higher, but the effect was not nearly as pronounced as that of varying the temperature. In addition, the simulated noise was compared to a PSD of the form $S(f)\propto e^{-\sqrt{N}/T}/f$, where $N$ is the size of the system, following recent theoretical developments. The $1/f^\alpha$ part of the simulated noise was found to roughly follow this equation, but the results remain inconclusive. Although the simple toy model did not produce pure $1/f^\alpha$ noise, the dynamics of the mobile defects do seem to affect the noise PSD, yielding noise closer to $1/f$ when there are more interactions between the defects due to either higher mobility or higher defect concentration, the white-noise tail notwithstanding. Recent experimental research on high-quality graphene employing more rigorous kinetic Monte Carlo simulations has displayed more promising results. This indicates that the dynamics of temporary cluster formation of mobile defects is relevant to understanding $1/f$ noise in metallic conductors, offering an objective for future work.
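The simulation idea can be caricatured in a few lines: defects random-walk on a lattice, a resistance-like observable depends on their configuration, and $\alpha$ is read off a log-log fit of the periodogram. The sketch below is a deliberately crude one-dimensional stand-in for the thesis's resistor network; the hop rule, the resistance model and all parameters are illustrative.

```python
# Toy 1/f^alpha pipeline: hopping defects on a ring -> resistance trace
# -> periodogram -> log-log slope. Illustrative only, not the thesis model.
import numpy as np

def simulate_noise(n_defects=8, size=64, steps=4096, hop_prob=0.3, seed=1):
    rng = np.random.default_rng(seed)
    pos = rng.integers(0, size, n_defects)
    trace = np.empty(steps)
    for t in range(steps):
        hops = rng.random(n_defects) < hop_prob          # activated hops
        pos = (pos + hops * rng.choice((-1, 1), n_defects)) % size
        # resistance proxy: more clustered defects -> larger value
        d = np.abs(pos[:, None] - pos[None, :])
        d = np.minimum(d, size - d)                      # ring distance
        trace[t] = np.sum(1.0 / (1.0 + d[np.triu_indices(n_defects, 1)]))
    return trace

def fitted_alpha(trace):
    psd = np.abs(np.fft.rfft(trace - trace.mean())) ** 2
    f = np.fft.rfftfreq(len(trace))
    slope, _ = np.polyfit(np.log(f[1:]), np.log(psd[1:]), 1)
    return -slope   # PSD ~ 1/f^alpha

alpha = fitted_alpha(simulate_noise())
print(round(alpha, 2))  # a positive exponent; a white-noise tail lowers it
```

In the thesis, the temperature enters through the thermally activated hop probability, which is why $\alpha$ responds most strongly to $T$.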
  • Heinonen, Arvo Arnoldas (2021)
    The goal of this work is to describe sheaves as an alternative to fiber bundles in geometric prequantization. We briefly go over geometric quantization of Euclidean space and make a connection with canonical quantization. After this, we look at the connections between covers of a topological space, Grothendieck topologies, and systems of local epimorphisms. Finally, we use these concepts to define sheaves and show how they can be used in prequantization in place of the more traditional fiber bundles to ensure the consistency of locally defined quantities.
  • Poltto, Lotta (2024)
    Despite continuous efforts to unveil the true nature of Dark Matter (DM), it remains a mystery. In this thesis we propose a model that can produce the correct relic abundance of DM in the current Universe while fitting the existing experimentally obtained constraints. We extend the Standard Model of Particle Physics (SM) with a singlet fermion, a right-handed neutrino that is not completely sterile, and a heavy real scalar singlet, and carry out the relic density calculations. The DM candidate here is the singlet fermion, which acts as a thermally decoupled Weakly Interacting Massive Particle. The theoretical framework is laid out in detail. Special attention is given to deriving the relic abundance from the Lee-Weinberg equation in terms of the yield and the decoupling temperature. It is found that the usual way of handling the thermally averaged cross section appearing in these definitions is not suitable in this case: the usual approximations are valid only when the thermally averaged cross section is almost linear in $s$, a demand that very few models can satisfy. The correct treatment of the cross section, an expansion in terms of the relative velocity, is presented with careful attention to detail. This methodology is then applied to our extension of the SM. Only tree-level processes are considered. Cross sections are calculated for each possible process to obtain the total cross section needed for the DM relic density calculations. We show how the different free parameters of the theory affect the relic abundance and which masses the right-handed neutrino is allowed to have. It is found that the parameters of this model are heavily constrained. Yet the model fits the constraints obtained from branching-ratio and direct detection (DD) experiments while producing the correct relic density. This holds when the mixing angle $\theta$ is of order $1 \times 10^{-4}$ and the right-handed neutrino mass is exactly half the heavy scalar mass, or higher than the heavy scalar mass. It is proposed that allowing lepton mixing and adding a separate mass term for the fermion could make the model less restricted; investigating this would be an interesting direction for future work. Nevertheless, the proposed DM candidate remains viable, and upcoming DD experiments will relatively soon reveal whether the singlet fermion is the DM particle we are seeking.
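The velocity expansion of the thermally averaged cross section feeds into the familiar analytic freeze-out estimate. A rough, generic sketch of that pipeline follows, using textbook (Kolb-Turner-style) formulas with assumed, hypothetical parameter values rather than the thesis's actual model:

```python
import numpy as np

# Standard analytic freeze-out estimate, assuming a velocity expansion
# <sigma v> = a + 6b/x with x = m/T. All numbers here are illustrative.

M_PL = 1.22e19    # Planck mass in GeV
G_STAR = 86.25    # effective relativistic degrees of freedom (assumed)

def freeze_out_x(m, a, b, g=2, x0=20.0):
    """Iterate the implicit freeze-out condition x_f = ln(c(x_f))."""
    x = x0
    for _ in range(50):
        x = np.log(0.038 * g * M_PL * m * (a + 6.0 * b / x)
                   / np.sqrt(G_STAR * x))
    return x

def relic_abundance(m, a, b):
    """Omega h^2 from the standard approximation (m in GeV; a, b in GeV^-2)."""
    xf = freeze_out_x(m, a, b)
    return 1.07e9 * xf / (np.sqrt(G_STAR) * M_PL * (a + 3.0 * b / xf))

# A 100 GeV WIMP with a ~ 2e-9 GeV^-2 (~ 2e-26 cm^3/s) lands near the
# observed Omega h^2 ~ 0.1
print(relic_abundance(100.0, 2e-9, 0.0))
```

The s-wave coefficient $a$ dominates here; a $p$-wave-suppressed model would instead be controlled by $b$, which is why the expansion, rather than a single effective cross section, matters.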
  • Ruosteoja, Tomi (2024)
    Astronomical observations suggest that the cores of neutron stars may have high enough baryon densities to contain quark matter (QM). The unavailability of lattice field theory makes perturbative computations important for understanding the thermodynamics of quantum chromodynamics at high baryon densities. The gluon self-energy contributes to many transport coefficients in QM, such as thermal conductivity and shear viscosity, through quark-quark scattering processes. The contribution of short-wavelength virtual gluons to the self-energy of soft, long-wavelength gluons, known as the hard-thermal-loop (HTL) self-energy, was recently computed at next-to-leading order (NLO) in perturbation theory. In this thesis, we complete the evaluation of the NLO self-energy of soft gluons in cold QM by computing the contribution from soft virtual gluons, known as the one-loop HTL-resummed self- energy. We use HTL effective theory, a reorganization of the perturbative series for soft gluons, within the imaginary-time formulation of thermal field theory. We present the result in terms of elementary functions and finite integrals that can be evaluated numerically. We show explicitly that the NLO self-energy is finite and gauge dependent. The NLO self-energy could be used to compute corrections to transport coefficients in cold QM, which may be relevant for neutron-star applications.
  • Lindblad, Victor (2022)
    This thesis explores surface diffusion on copper and iron surfaces using an accelerated molecular dynamics (MD) method known as collective variable-driven hyperdynamics (CVHD). The thesis is divided into six main sections: Introduction, Theory, Methods, Simulations, Results and Conclusion. The introduction briefly explains the main interest behind the topic and why diffusion is a difficult subject for classical MD simulations. The theory section describes the physics of diffusion in metals, as well as the important quantities that can be determined from these types of simulations. The following section covers the basics of the molecular dynamics simulation method, along with the theoretical basis of collective variable-driven hyperdynamics and how it is implemented alongside molecular dynamics. The simulations section explains the system-building methodology in more technical detail, discusses the key parameters and gives the reasoning behind their chosen values. Since both copper and iron systems have been simulated, the two sets of systems are described independently. The results section presents the results for the copper and iron systems separately; in both cases, the activation energies of the dominant diffusion mechanisms remain the main point of focus. Lastly, the results are dissected and summarized.
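Activation energies of this kind are typically extracted from simulated hop rates at several temperatures via an Arrhenius fit. A minimal sketch with synthetic rates (the values below are assumed for illustration, not data from the thesis):

```python
import numpy as np

# Fit the Arrhenius law  k(T) = nu * exp(-E_a / (k_B T))  in log form.

K_B = 8.617e-5  # Boltzmann constant in eV/K

temperatures = np.array([300.0, 400.0, 500.0, 600.0])   # K
E_A_TRUE, NU = 0.5, 1e13                                # eV, 1/s (assumed)
rates = NU * np.exp(-E_A_TRUE / (K_B * temperatures))   # hypothetical hop rates

# Linear fit:  ln k = ln nu - E_a * (1 / (k_B T))
slope, intercept = np.polyfit(1.0 / (K_B * temperatures), np.log(rates), 1)
E_a, nu = -slope, np.exp(intercept)
print(f"E_a = {E_a:.3f} eV, prefactor = {nu:.2e} 1/s")
```

With real CVHD data the rates carry statistical noise, so the fitted slope comes with an uncertainty, but the extraction itself is this one-line linear regression.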
  • Vaaranta, Antti (2022)
    One of the main ways of physically realizing quantum bits for quantum technology is to manufacture them as superconducting circuits. These qubits are artificially built two-level systems that act as carriers of quantum information. They come in a variety of types, but one of the most common is the transmon qubit. The transmon is a more stable, improved version of earlier superconducting qubit types, with longer coherence times. The qubit cannot function properly on its own: it needs other circuit elements for control and readout of its state, and is thus only a small part of a larger superconducting circuit with which it interacts. Understanding this interaction, where it comes from and how it can be modified to our liking, allows researchers to design better quantum circuits and to improve existing ones. It is especially important to understand how noise travelling through the qubit drive lines to the chip affects the time evolution of the qubit. Reducing the amount of noise leads to longer coherence times, but it is also possible to engineer the noise to our advantage and uncover novel ways of quantum control. In this thesis the effects of a variable-temperature noise source on the qubit drive line are studied. A theoretical model describing the time evolution of the quantum state is built, starting from the basic elements of the quantum circuit and leading to a master equation for the qubit dynamics. This allows us to understand how the different choices made in the manufacturing process of the quantum circuit affect the time evolution. As a proof of concept, the model is solved numerically using QuTiP in the specific case of a fixed-frequency, dispersive transmon qubit. The solution shows a decohering qubit with no dissipation. The model is also solved in the temperature range 0 K < T ≤ 1 K to show how the decoherence times behave with respect to the temperature of the noise source.
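"Decoherence without dissipation" means the populations of the qubit state survive while the coherences decay. A toy illustration of that behaviour, with a single assumed pure-dephasing rate standing in for the thesis's engineered noise source (plain NumPy rather than QuTiP, so it is self-contained):

```python
import numpy as np

# Pure dephasing: drho/dt = (gamma_phi / 2) * (sz rho sz - rho).
# Diagonals (populations) are untouched; off-diagonals decay at gamma_phi.

gamma_phi = 1e6                # pure dephasing rate in 1/s (assumed)
sz = np.diag([1.0, -1.0])      # Pauli-Z

def evolve(rho, t_final, dt=1e-9):
    """Euler-integrate the dephasing master equation."""
    for _ in range(int(t_final / dt)):
        rho = rho + dt * (gamma_phi / 2.0) * (sz @ rho @ sz - rho)
    return rho

rho0 = np.full((2, 2), 0.5)    # |+> state: maximal coherence
rho_t = evolve(rho0, 2e-6)

# Populations stay at 0.5 (no dissipation) while the coherence
# decays as exp(-gamma_phi * t)
print(rho_t)
```

The thesis's master equation is richer (drive-line filtering, temperature-dependent rates), but this is the qualitative signature the numerical QuTiP solution exhibits.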
  • Rosenberg, Otto (2023)
    Bayesian networks (BN) are models that map the mutual dependencies and independencies between a set of variables. The structure of the model can be represented as a directed acyclic graph (DAG): a graph whose nodes represent variables and whose directed edges represent dependencies. BNs can be either constructed using knowledge of the system or derived computationally from observational data. Traditionally, BN structure discovery from observational data has been done with heuristic algorithms, but advances in deep learning have made it possible to train neural networks for this task in a supervised manner. This thesis provides an overview of BN structure discovery and discusses the strengths and weaknesses of the emerging supervised paradigm. One supervised method, the EQ-model, which uses equivariant neural networks for structure discovery, is explored in further detail with empirical tests. Through hyperparameter optimisation and a move to online training, the performance of the EQ-model is improved. The EQ-model is still observed to underperform compared to a competing score-based model, NOTEARS, but offers convenient features, such as a dramatically faster runtime, that compensate for the reduced performance. Several interesting lines of further study that could further improve the performance of the EQ-model are also identified.
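The acyclicity requirement on BN structures is what score-based methods like NOTEARS enforce continuously: a weighted adjacency matrix $W$ is a DAG exactly when $h(W) = \mathrm{tr}(e^{W \circ W}) - d = 0$. A small sketch of that check, with the matrix exponential evaluated as a power series so no extra libraries are needed:

```python
import numpy as np

# NOTEARS acyclicity measure: h(W) = tr(exp(W ∘ W)) - d, where ∘ is the
# elementwise product and d the number of nodes. h = 0 iff W encodes a DAG;
# h > 0 signals at least one directed cycle.

def notears_h(W, terms=30):
    """Evaluate tr(e^{W∘W}) - d via the exponential's power series."""
    d = W.shape[0]
    A = W * W                      # elementwise square: non-negative entries
    term = np.eye(d)
    total = np.trace(term)
    for k in range(1, terms):
        term = term @ A / k        # A^k / k!
        total += np.trace(term)
    return total - d

dag = np.array([[0, 1, 1],
                [0, 0, 1],
                [0, 0, 0]], dtype=float)     # X0 -> X1 -> X2, X0 -> X2
cyclic = np.array([[0, 1, 0],
                   [0, 0, 1],
                   [1, 0, 0]], dtype=float)  # X0 -> X1 -> X2 -> X0

print(notears_h(dag))     # ~0: acyclic
print(notears_h(cyclic))  # > 0: contains a cycle
```

NOTEARS adds this $h(W)$ as a constraint to a data-fit score and optimises $W$ directly; the supervised EQ-model instead learns to predict the structure from data.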
  • Prasad, Ayush (2024)
    Machine learning is increasingly being applied to model molecular data in various scientific fields such as drug discovery, materials science, and atmospheric science. However, the high dimensionality of molecular features causes challenges when machine learning algorithms are applied directly. Dimensionality reduction methods can help reduce the feature space and create new informative features. In this thesis, we first review current methods for representing molecules for machine learning. We then discuss the importance of evaluating dimensionality reduction visualizations, and review and propose metrics for this purpose. We present Gradient Boosting Mapping (GBMAP), a supervised dimensionality reduction method. Through experiments on benchmark datasets and the GeckoQ molecular dataset, we demonstrate that low-dimensional embeddings created by GBMAP can be used as features to significantly improve the performance of simpler, interpretable machine learning models.
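The core idea, that a supervised low-dimensional embedding can serve as a feature for a simpler model, can be illustrated generically. In the sketch below a plain least-squares direction stands in for GBMAP (whose actual gradient-boosting construction is described in the thesis), and the data are synthetic, not the GeckoQ dataset:

```python
import numpy as np

# A supervised 1-D embedding z = Xw can outperform any single raw feature
# when the target depends on a combination of features.

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 20))                   # hypothetical descriptors
y = X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=300)

def r2_1d(f, y):
    """R^2 of a one-feature ordinary least-squares fit."""
    F = np.c_[np.ones_like(f), f]
    beta, *_ = np.linalg.lstsq(F, y, rcond=None)
    return 1.0 - np.var(y - F @ beta) / np.var(y)

w, *_ = np.linalg.lstsq(X, y, rcond=None)        # supervised direction
z = X @ w                                        # 1-D embedding

best_raw = max(r2_1d(X[:, j], y) for j in range(X.shape[1]))
print(f"best single raw feature R^2: {best_raw:.3f}")
print(f"supervised embedding R^2:    {r2_1d(z, y):.3f}")
```

The interpretable downstream model (here a one-variable linear fit) stays simple; the embedding carries the multivariate structure.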