Browsing by master's degree program "Master's Programme in Theoretical and Computational Methods"
Now showing items 1-20 of 32

(2023) Simulating space plasma on a global scale is computationally demanding due to the system sizes involved. Modeling regions with variable resolution depending on physical behavior can save computational resources without compromising too much on simulation accuracy. This thesis examines adaptive mesh refinement as a method of optimizing Vlasiator, a global hybrid-Vlasov plasma simulation. The behavior of plasma near the Earth's magnetosphere and the different characteristic scales that need to be considered in simulation are introduced. Kinetic models using statistical methods and fluid methods are examined. Modeling electrons kinetically requires resolutions orders of magnitude finer than for ions, so in Vlasiator ions are modeled kinetically and electrons as a fluid. This allows for a lighter simulation while preserving some kinetic effects. The mesh refinement used in Vlasiator is introduced as a method to save memory and computational work. Due to the structure of the magnetosphere, the required resolution is not uniform across the simulation domain: in particular, the tail regions and the magnetopause exhibit rapid spatial changes compared to the relatively uniform solar wind. The region to refine is parametrized and static throughout a simulation run. Adaptive mesh refinement based on the simulation data is introduced as an evolution of this method. This provides several benefits: more rigorous optimization of refinement regions, easier reparametrization for different conditions, the ability to follow dynamic structures, and savings in initialization time. Refinement is done based on two indices measuring the spatial rate of change of relevant variables and reconnection, respectively. The grid is re-refined at set intervals as the simulation runs. Tests similar to production runs show adaptive refinement to be an efficient replacement for static refinement. The refinement parameters produce results similar to the static method, while giving somewhat different refinement regions.
Performance is in line with static refinement, and refinement overhead is minor. Further avenues of development are presented, including dynamic refinement intervals.
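The idea of an index measuring the spatial rate of change can be sketched with a toy criterion. The normalized-gradient threshold below is illustrative only, not Vlasiator's actual refinement indices:

```python
import numpy as np

def refine_flags(field, alpha=0.5):
    """Mark cells for refinement where the normalized gradient magnitude
    exceeds the threshold alpha -- a toy stand-in for an index measuring
    the spatial rate of change of a simulation variable."""
    gx, gy = np.gradient(field)
    g = np.hypot(gx, gy)
    gmax = g.max()
    return (g / gmax > alpha) if gmax > 0 else np.zeros_like(g, dtype=bool)

# A steep front (e.g. a magnetopause-like transition) is flagged for
# refinement, while the smooth background stays at coarse resolution.
x = np.linspace(-5.0, 5.0, 64)
X, Y = np.meshgrid(x, x)
field = np.tanh(4.0 * X)            # sharp transition near x = 0
flags = refine_flags(field, alpha=0.5)
```

In a real adaptive run the flags would be recomputed at set intervals and fed to the mesh library, which refines or coarsens the flagged blocks.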

Applying fluctuations to simulations of early universe bubble collisions in O(N) scalar field theory (2023) Many beyond-the-Standard-Model theories include a first-order phase transition in the early universe. A phase transition of this kind is presumed to be able to source gravitational waves that might be observed with future detectors, such as the Laser Interferometer Space Antenna. A first-order phase transition from the symmetric (metastable) minimum to the broken (stable) one causes the nucleation of broken-phase bubbles. These bubbles expand and then collide. It is important to examine in depth how the bubbles collide, as the events during the collision affect the gravitational wave spectrum. We assume the field to interact very weakly or not at all with the particle fluid in the early universe. The universe also experiences fluctuations due to thermal or quantum effects. We look into how these background fluctuations affect the field evolution and bubble collisions during the phase transition in O(N) scalar field theory. Specifically, we numerically simulate two colliding bubbles nucleated on top of the background fluctuations, with the field being an N-dimensional vector in the O(N) group. Due to the symmetries present, the system can be examined in cylindrical coordinates, lowering the number of simulated spatial dimensions. In this thesis, we perform the calculation of the initial-state fluctuations and simulate them together with two bubbles numerically. We present results of the simulation of the field, concentrating on the effects of fluctuations on the O(N) scalar field theory.

(2023) In this thesis a computation of the nonperturbative Lorentzian graviton propagator, which has appeared in the literature, is outlined. Firstly, the necessary ingredients for the computation are introduced and discussed. These include: General Relativity (GR), its path integral quantisation around a Minkowski space background, and the definition of the graviton propagator along with its relation to the one-particle-irreducible (1PI) graviton 2-point function. A brief discussion of the perturbative nonrenormalizability of the theory is followed by the introduction of the functional renormalization group (fRG) equation, from which an fRG equation for the scalar coefficient function of the transverse-traceless (TT) 1PI graviton 2-point function is derived. After these ingredients have been introduced, we proceed to outline the computation in question, skipping the details of its most involved steps. The computation starts by defining the spectral function and the Källén-Lehmann spectral representation of propagators. The nonperturbative TT 1PI graviton 2-point function, the propagators, and the spectral functions are parameterized, and the fRG flow equation for the TT 1PI graviton 2-point function is used together with certain renormalization conditions to define renormalization group (RG) flow equations for these parameters. The solution of the flow of the parameters is displayed and is used to construct the graviton spectral function and the graviton propagator, which are both displayed graphically. Finally, a discussion of the features of the spectral function and propagator is given, and these results are briefly discussed in the context of the asymptotic safety program for quantum gravity and some of its open issues.

(2022) Sum-product networks (SPNs) are graphical models capable of handling large amounts of multidimensional data. Unlike many other graphical models, SPNs are tractable if certain structural requirements are fulfilled; a model is called tractable if probabilistic inference can be performed in polynomial time with respect to the size of the model. The learning of SPNs can be separated into two modes, parameter learning and structure learning. Many earlier approaches to SPN learning have treated the two modes as separate, but it has been found that good results can be achieved by alternating between them. One example of this kind of algorithm was presented by Trapp et al. in the article Bayesian Learning of Sum-Product Networks (NeurIPS, 2019). This thesis discusses SPNs and a Bayesian learning algorithm developed based on the aforementioned algorithm, differing in some of the methods used. The algorithm by Trapp et al. uses Gibbs sampling in the parameter learning phase, whereas here Metropolis-Hastings MCMC is used. The algorithm developed for this thesis was used in two experiments, with a small and simple SPN and with a larger and more complex SPN. The effect of the data set size and the complexity of the data was also explored. The results were compared to those obtained from running the original algorithm developed by Trapp et al. The results show that having more data in the learning phase makes the results more accurate, as it is easier for the model to spot patterns in a larger set of data. It was also shown that the model was able to learn the parameters in the experiments if the data were simple enough, in other words, if each dimension of the data contained only one distribution. In the case of more complex data, where there were multiple distributions per dimension, the computation visibly struggled.
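The Metropolis-Hastings parameter update mentioned above can be sketched in a few lines. This is a generic random-walk sampler on a toy Gaussian log-posterior, not the thesis code or an SPN posterior:

```python
import math
import random

def metropolis_hastings(log_post, x0, n_steps=2000, step=0.5, seed=1):
    """Random-walk Metropolis-Hastings: propose x' = x + N(0, step) and
    accept with probability min(1, post(x')/post(x)). A minimal sketch of
    the parameter-update idea, with a symmetric Gaussian proposal."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        # Metropolis acceptance test in log space (symmetric proposal).
        if math.log(rng.random() + 1e-300) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Toy posterior: Gaussian log-density centred at 3 with unit variance.
samples = metropolis_hastings(lambda m: -0.5 * (m - 3.0) ** 2, x0=0.0)
mean = sum(samples[500:]) / len(samples[500:])   # discard burn-in
```

In an alternating scheme of the kind described above, a sweep like this would update the leaf-distribution parameters between structure moves.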

(2022) We study the properties of flat band states of bosons and their potential for all-optical switching. Flat bands are dispersionless energy bands found in certain lattice structures. The corresponding eigenstates, called flat band states, have the unique property of being localized to a small region of the lattice. The high sensitivity of flat band lattices to the effects of interactions could make them suitable for fast, energy-efficient switching. We use the Bose-Hubbard model and computational methods to study multiboson systems by simulating the time evolution of the particle states and computing the particle currents. As the systems were small, fewer than ten bosons, the results could be computed exactly. This was done by solving the eigenstates of the system Hamiltonian using exact diagonalization. We focus on a finite-length sawtooth lattice, first simulating weakly interacting bosons initially in a flat band state. The particle current is shown to typically increase linearly with interaction strength. However, by fine-tuning the hopping amplitudes and boundary potentials, the particle current through the lattice can be highly suppressed. We use this property to construct a switch which is turned on by pumping the input with control photons. The inclusion of particle interactions disrupts the system, resulting in a large nonlinear increase in particle current. We find that certain flat band lattices could be used as a medium for an optical switch capable of controlling the transport of individual photons. In practice, highly optically nonlinear materials are required to reduce the switching time, which is found to be inversely proportional to the interaction strength.
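The exact-diagonalization time evolution described above fits in a few lines. The two-site hopping Hamiltonian below is a toy stand-in for the sawtooth-lattice Bose-Hubbard Hamiltonian of the thesis:

```python
import numpy as np

def evolve(H, psi0, t):
    """Exact time evolution |psi(t)> = exp(-i H t) |psi(0)> via the
    eigendecomposition of a (small) Hamiltonian matrix, as is feasible
    for systems of fewer than ten bosons."""
    E, V = np.linalg.eigh(H)
    return V @ (np.exp(-1j * E * t) * (V.conj().T @ psi0))

# Toy example: a single particle hopping between two sites with
# amplitude J; after a quarter period it has fully transferred.
J = 1.0
H = np.array([[0.0, -J], [-J, 0.0]])
psi0 = np.array([1.0, 0.0], dtype=complex)
psi_t = evolve(H, psi0, t=np.pi / (2 * J))
```

Particle currents follow from expectation values of the current operator in the evolved state; for the small Hilbert spaces involved, this is exact rather than approximate.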

(2022) We study a system of cold high-density matter consisting purely of quarks and gluons. The mathematical construction of Quantum Chromodynamics (QCD) introduces interactions between the fields, which modify the thermodynamic properties of the system. In the presence of interactions, we cannot solve the thermodynamic properties of the system analytically. The method is instead to expand the result in a series in terms of the QCD coupling constant, which describes the strength of the interaction. This is referred to as perturbation theory in the context of thermal field theory (TFT). We introduce the basic calculation methods used in QCD and in TFTs in general. We also include in the calculation the chemical potential associated with the number of quarks in the system. At zero temperature, quarks form a Fermi sphere such that energy states lower than the chemical potential are Pauli blocked, and the resulting fermionic momentum integrals are modified as a consequence. We can split these integrals into two parts, referred to as the vacuum and matter parts. We can likewise split the calculation of the pressure into two distinct contributions: one from skeleton diagrams and one from ring diagrams. The ring diagrams have unphysical IR divergences that we cannot cancel using the counterterms. This is why hard thermal loop (HTL) effective field theory (EFT) is introduced. We discuss this HTL framework, which requires the computation of the matter part of the gluon polarization tensor, which we also evaluate in this thesis.

(2023) Certain topological phases of matter exhibit low-energy quasiparticles that closely resemble relativistic Weyl fermions due to their linear dispersion. This notion leads to a quasirelativistic description of these nonrelativistic condensed matter quasiparticles. In relativistic quantum field theory, Weyl fermions are subject to chiral anomalies when coupled to gauge fields or nontrivial background geometries. Condensed matter Weyl quasiparticles similarly experience anomalies from their background fields, leading to anomalous transport phenomena. We review the field theory of relativistic fermions in curved spacetimes with torsion, and the macroscopic BCS theory of superconductors and superfluids. Using the example of p+ip-paired superfluids and superconductors, we show how their gapless excitations are quasirelativistic Weyl fermions in an emergent spacetime determined by their background fields. With a simple Landau level argument, we then argue that the presence of torsion in this emergent spacetime leads to a chiral anomaly for the Weyl quasiparticles. In the context of relativistic theory, the torsional contribution to the chiral anomaly is controversial, not least because it depends on a non-universal UV cutoff. The Landau level calculation presented here is also ambiguous for relativistic Weyl fermions. However, as we show, the quasirelativistic approximation we use and the properties of the underlying superfluid or superconductor lead to a natural cutoff for the quasiparticle anomaly. We match this emergent torsional anomaly to the hydrodynamic anomaly in the p+ip superfluid ³He-A.

(2022) Topological defects and solitons are robust, nontrivial configurations of a physical field that appear in many branches of physics, including condensed matter physics, quantum computing, and particle physics. A fruitful testbed for experimenting with these fascinating structures is provided by dilute Bose–Einstein condensates. Bose–Einstein condensation was first predicted in 1925 and finally achieved in a dilute atomic gas in a breakthrough experiment in 1995. Since then, the field has expanded to the study of a variety of nontrivial topological structures in condensates of various atomic species. Bose–Einstein condensates with internal spin degrees of freedom may accommodate an especially rich variety of topological structures. Spinor condensates realized in optically trapped ultracold alkali atom gases can be conveniently controlled by external fields and afford an accurate mean-field description. In this thesis, we study the creation and evolution of a monopole-antimonopole pair in such a spin-1 Bose–Einstein condensate by numerically solving the Gross–Pitaevskii equation. The creation of Dirac monopole-antimonopole pairs in a spin-1 Bose–Einstein condensate was numerically demonstrated, and a method for their creation proposed, in an earlier study. Our numerical results demonstrate that the proposed method can be used to create a pair of isolated monopoles with opposite topological charges in a spin-1 Bose–Einstein condensate. We found that the monopole-antimonopole pair created in the polar phase of the spin-1 condensate is unstable against decay into a pair of Alice rings with oscillating radii. As a result of a rapid polar-to-ferromagnetic transition, these Alice rings were observed to decay by expanding on a short timescale.

(2023) The QCD axion arises as a necessary consequence of the popular Peccei-Quinn solution to the strong CP problem in particle physics. The axion turns out, very naturally, to possess all the usual qualities of a good dark matter (DM) candidate. Having the potential to solve two major problems in particle cosmology in one fell swoop makes the axion a very attractive prospect. In recent years, the weakening of the traditional WIMP dark matter paradigm, and axion search experiments just beginning to reach the sensitivities required to look for the QCD axion, have further increased interest in axion physics. In this thesis, the basics of axion physics are reviewed, and an in-depth exposition of common direct detection experiments and of astrophysical and laboratory limits is given. Particular emphasis is placed on direct detection using the axion-photon coupling, as it is the only coupling for which experimental sensitivity is sufficient to probe the QCD axion. The benchmark experiments of light-shining-through-wall (LSTW) setups, helioscopes, and cavity haloscopes are given a thorough theoretical treatment. Other couplings and related experiments are relevant when looking for axion-like particles (ALPs), which are postulated by various extensions of the Standard Model but which do not solve the strong CP problem. A general overview of the prevalent ALP searches is given. Most of the described experimental setups, with some exceptions, are actually searches for very general weakly interacting particles (WISPs) with a certain coupling. The searches are thus well motivated regardless of the future standing of the QCD axion. A chapter is dedicated to axion dark matter and its creation mechanisms, in particular the misalignment mechanism. Two scenarios are mapped out, depending on whether the Peccei-Quinn symmetry breaks spontaneously before or after inflation. Both cases have experimental implications, which are compared. These considerations motivate an axion dark matter window which should be prioritized by experiments. A significant part of this thesis is dedicated to mapping out the experimental landscape of axions today. The up-to-date astrophysical and laboratory limits on the most prominent axion couplings, along with projections for some near-future experiments, are compiled into a set of exclusion plots.

(2022) We review techniques of perturbative thermal quantum chromodynamics (QCD) in the imaginary-time formalism (ITF). The infrared (IR) problems arising from the perturbative treatment of the equilibrium thermodynamics of QCD, and their phenomenological causes, are investigated in detail. We also discuss the construction of the two effective field theory (EFT) frameworks most often used in modern high-precision calculations to overcome these problems. The EFTs are the dimensionally reduced theories EQCD and MQCD, and hard thermal loop (HTL) effective theory. EQCD is three-dimensional Euclidean Yang-Mills theory coupled to an adjoint scalar field, and MQCD is three-dimensional Euclidean pure Yang-Mills theory. The effective parameters in these theories are determined through matching calculations. HTL is based on the resummation of hard thermal loops and uses effective propagators and vertex functions. We also discuss the perturbative determination of the pressure of QCD. In general, this thesis details the calculations and the methodology.

(2022) I discuss recent work regarding electronic structure calculations on quantum computers. I introduce quantum computing and electronic structure theory, and then discuss different mappings from electrons and excitation operators to qubits and unitary operators, mainly Jordan–Wigner and Bravyi–Kitaev. I discuss adiabatic quantum computing in connection with state preparation on quantum computers. I introduce the most important algorithms in the field, namely quantum phase estimation (QPE) and the variational quantum eigensolver (VQE), and mention recent modifications and improvements to these algorithms. I then take a detour to discuss noise and quantum operations, a model for understanding how quantum computations fail because of noise from the environment. Because of this noise, quantum simulators have risen as a tool for understanding quantum computers, and I have used such simulators to do electronic structure calculations on small atoms. The algorithm I have used, QPE, yields the exact result within the employed basis. As a basis I use numerical orbitals, which are very robust due to their flexibility.
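The textbook QPE readout mentioned above can be simulated classically for a single known eigenphase. This sketch is pedagogical and not the simulator used in the thesis:

```python
import numpy as np

def qpe_estimate(phi, n_bits):
    """Simulate textbook quantum phase estimation for one eigenstate with
    eigenphase phi: after the Hadamards and controlled-U^(2^k) kickbacks
    the ancilla amplitudes are e^(2*pi*i*k*phi)/sqrt(N); applying the
    inverse QFT and measuring yields the best n_bits approximation of phi."""
    N = 2 ** n_bits
    k = np.arange(N)
    state = np.exp(2j * np.pi * k * phi) / np.sqrt(N)
    # Inverse quantum Fourier transform as an explicit unitary matrix.
    inverse_qft = np.exp(-2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)
    probs = np.abs(inverse_qft @ state) ** 2
    return int(np.argmax(probs)) / N

est = qpe_estimate(phi=0.375, n_bits=4)   # 0.375 = 6/16, exactly representable
```

When phi is exactly representable in n_bits, the measurement distribution is a delta peak; otherwise the peak broadens and the estimate is accurate to roughly one part in 2^n_bits.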

(2023) The Smoluchowski coagulation equation is considered one of the most fundamental equations in the classical description of matter, alongside the Boltzmann, Navier-Stokes, and Euler equations. It has applications from physical chemistry to astronomy. In this thesis, a new existence result for measure-valued solutions to the coagulation equation is proven. The result is stronger and more general than a previously claimed one, and holds for a generic class of coagulation kernels, including various kernels used in applications. The coagulation equation models binary coagulation of objects characterized by a strictly positive real number called size, which often represents mass or volume in applications. In binary coagulation, two objects can merge together at a rate characterized by the so-called coagulation kernel. The time evolution of the size distribution is given by the coagulation equation. Traditionally the coagulation equation has two forms, discrete and continuous, referring to whether the objects' sizes take discrete or continuous values. An existence result similar to the one proven in this thesis has been obtained for the continuous coagulation equation, while the discrete coagulation equation is often favored in applications. Being able to study discrete and continuous systems and their mixtures at the same time has motivated the study of measure-valued solutions to the coagulation equation. After motivating the existence result, its proof is organized into four Steps described at the end of the introduction. The needed mathematical tools and their connection to the four Steps are presented in chapter 2. The precise mathematical statement of the existence result is given in chapter 3 together with Step 1, where the coagulation equation is regularized using a parameter ε ∈ (0, 1) into a more manageable regularized coagulation equation.
Step 2 is done in chapter 4 and it consists of proving existence and uniqueness of a solution f_ε for each regularized coagulation equation. Step 3 and Step 4 are done in chapter 5. In Step 3, it will be proven that the regularized solutions {f_ε} have a converging subsequence in the topology of uniform convergence on compact sets. Step 4 finishes the existence proof by verifying that the subsequence’s limit satisfies the original coagulation equation. Possible improvements and future work are outlined in chapter 6.
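The discrete form of the equation being analysed can be illustrated numerically. The explicit Euler step below is a minimal illustration of the gain/loss structure, not part of the thesis's existence proof:

```python
def smoluchowski_step(n, K, dt):
    """One explicit Euler step of the discrete Smoluchowski equation
    dn_k/dt = (1/2) * sum_{i+j=k} K(i,j) n_i n_j - n_k * sum_j K(k,j) n_j,
    with sizes 1..len(n) and n[k-1] holding the density of size k.
    Mergers beyond the truncation size are simply lost."""
    m = len(n)
    out = list(n)
    for k in range(1, m + 1):
        # Gain: pairs (i, k-i) merging into size k, counted once each.
        gain = 0.5 * sum(K(i, k - i) * n[i - 1] * n[k - i - 1]
                         for i in range(1, k))
        # Loss: size k merging with any other size.
        loss = n[k - 1] * sum(K(k, j) * n[j - 1] for j in range(1, m + 1))
        out[k - 1] += dt * (gain - loss)
    return out

# Constant kernel, monodisperse initial data: total mass sum_k k*n_k is
# conserved here because no mass reaches the truncation size.
n = [1.0] + [0.0] * 9
n1 = smoluchowski_step(n, K=lambda i, j: 1.0, dt=0.01)
```

The measure-valued formulation studied in the thesis covers this discrete system, its continuous counterpart, and mixtures of the two within one framework.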

(2022) The variational quantum eigensolver (VQE) is one of the most promising proposals for a hybrid quantum-classical algorithm made to take advantage of near-term quantum computers. With the VQE it is possible to find ground state properties of various molecules, a task for which many classical algorithms have been developed but which either become too inaccurate or too resource-intensive, especially for so-called strongly correlated problems. The advantage of the VQE comes from the ability of a quantum computer to represent a complex system with fewer so-called qubits than a classical computer would with bits, thus making the simulation of large molecules possible. One of the major bottlenecks for the VQE to become viable for simulating large molecules, however, is the scaling of the number of measurements necessary to estimate expectation values of operators. Numerous solutions have been proposed, including the use of adaptive informationally complete positive operator-valued measures (IC-POVMs) by García-Pérez et al. (2021). Adaptive IC-POVMs have been shown to improve the precision of expectation value estimates on quantum computers, with better scaling in the number of measurements compared to existing methods. The use of these adaptive IC-POVMs in a VQE allows for more precise energy estimates and additional expectation value estimates of separate operators without any further overhead on the quantum computer. We show that this approach improves upon existing measurement schemes and adds a layer of flexibility, as IC-POVMs represent a form of generalized measurements. In addition to a naive implementation using IC-POVMs as part of the energy estimation in the VQE, we propose techniques to reduce the number of measurements, either by adapting the number of measurements necessary for a given energy estimate or through estimation of the operator variance of the Hamiltonian.
We present results for simulations using the former technique, showing that we are able to reduce the number of measurements while retaining the improvement in measurement precision obtained from IC-POVMs.

(2023) Many extensions of the Standard Model of particle physics feature a first-order phase transition in the very early universe. Such a phase transition would source gravitational waves through the collision of nucleation bubbles. These in turn could be detected, e.g., with the future space-based gravitational wave observatory LISA (Laser Interferometer Space Antenna). Cosmic strings, on the other hand, are line-like topological defects. In this work, we focus on global strings arising from the spontaneous breakdown of a global symmetry. One example of global strings are axionic strings, which are a popular research topic owing to the role of the axion as a potential dark matter candidate and a solution to the strong CP problem. Our aim in this work is to combine these two sets of early-universe phenomena: we investigate the possibility of creating global strings through the bubble collisions of a first-order phase transition. We use a simplified model with a two-component scalar field to nucleate the bubbles and simulate their expansion, obtaining a short-lived network of global strings in the process. We present results for the string lifetime, the mean string separations corresponding to different mean bubble separations, and gravitational wave spectra.

(2021) We determine the leading thermal contributions to various self-energies in finite-temperature and -density quantum chromodynamics (QCD). The so-called hard thermal loop (HTL) self-energies are calculated for the quark and gluon fields at one-loop order and for the photon field at two-loop order using the real-time formulation of thermal field theory. In-medium screening effects arising at long wavelengths necessitate the reorganization of the perturbative series of thermodynamic quantities. Our results may be directly applied in a reorganization called HTL resummation, which applies an effective theory for the long-wavelength modes in the medium. The photonic result provides a partial next-to-leading-order correction to the current leading-order result and can later be extended to pure QCD with the techniques we develop. The thesis is organized as follows. First, by considering a complex scalar field, we review the main aspects of the equilibrium real-time formalism to build a solid foundation for our thermal field theoretic calculations. Then, these concepts are generalized to QCD, and the properties of the QCD self-energies are thoroughly studied. We discuss the long-wavelength collective behavior of thermal QCD and introduce the HTL theory, outlining also the main motivations for our calculations. The explicit computations of the self-energies are presented in extensive detail to highlight the computational techniques we employ.

(2022) Flares are short, high-energy magnetic events on stars, including the Sun. Observations of young stars and red dwarfs regularly show flare events multiple orders of magnitude more energetic than even the fiercest solar storms ever recorded. As our technology remains vulnerable to disruptions from space weather, the study of flares and other stellar magnetic activity is crucial. Until recently, the detection of extrasolar flares has required much manual work and observation resources. This work presents a mostly automatic pipeline to detect extrasolar flare events and estimate their energies from optical light curves. To model and remove the star's background radiation in spite of complex periodicity, short windows of nonlinear support vector regression are used to form a multi-model consensus. Outliers above the background are flagged as likely flare events, and a template model is fitted to the flux residual to estimate the energy. This approach is tested on light curves of the stars AB Doradus and EK Draconis collected by the Transiting Exoplanet Survey Satellite, and dozens of flare events are found. The results are consistent with recent literature, and the method generalizes to further observations with different telescopes and different stars. Challenges remain regarding edge cases, uncertainties, and reliance on user input.
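The detrend-and-flag step of such a pipeline can be sketched in simplified form. Here a running median stands in for the windowed support-vector-regression consensus, and a robust MAD scale for the pipeline's uncertainty model; both substitutions are assumptions for illustration:

```python
import numpy as np

def flag_flares(flux, window=25, n_sigma=3.0):
    """Flag points sitting well above a locally estimated background:
    subtract a running-median background, estimate the noise scale
    robustly, and threshold the positive residual."""
    half = window // 2
    background = np.array([np.median(flux[max(0, i - half):i + half + 1])
                           for i in range(len(flux))])
    resid = flux - background
    sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))  # robust std
    return resid > n_sigma * sigma

# Quiet light curve with one injected flare (fast rise, gradual decay).
rng = np.random.default_rng(0)
flux = 1.0 + 0.001 * rng.standard_normal(500)
flux[200:204] += np.array([0.05, 0.03, 0.02, 0.01])
flags = flag_flares(flux)
```

In the full pipeline, a flare template would then be fitted to the flagged residual to estimate the event energy.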

(2023) Quantum Monte Carlo (QMC) is an accurate but computationally expensive technique for simulating the electronic structure of solids; its use for modelling positron states and annihilation in solids is relatively new. Such simulations can support positron annihilation spectroscopy and help with defect characterisation and vacancy identification in solids by calculating the positron lifetime with increased accuracy and comparing it to experimental results. One method of reducing the computational cost of simulations whilst maintaining chemical accuracy is to employ pseudopotentials. Pseudopotentials approximate the interactions between the outer valence electrons of an atom and the inner core electrons, which are difficult to model. By replacing the core electrons of an atom with an effective potential, a level of accuracy can be maintained whilst reducing the computational cost. This work extends existing research with a new set of pseudopotentials in which fewer core electrons are replaced by an effective potential, leading to an increase in the number of core electrons in the simulation. With the inclusion of additional core electrons, the corrections that would otherwise need to be made to the positron lifetime may no longer be needed. Silicon is chosen as the element under study, as its high electron count makes it difficult to model accurately in positron simulations. The suitability of these new pseudopotentials for QMC is shown by calculating the cohesive and relaxation energies, with comparisons made to previously used pseudopotentials. The positron lifetime is calculated from QMC simulations and compared against experimental and theoretical values. The simulation method and the challenges due to the inclusion of more core electrons are presented and discussed. The results show that these pseudopotentials are suitable for use in QMC studies, including positron lifetime studies.
With the inclusion of more core electrons in the simulation, a positron lifetime was calculated with accuracy similar to previous studies, without the need for corrections, demonstrating the validity of these pseudopotentials for use in positron studies. This validation enables future theoretical studies to better capture the annihilation characteristics in cases where core electrons are important. In achieving these results, it was found that energy minimisation, rather than variance minimisation, was needed for optimising the wavefunction with these pseudopotentials.

(2023) The purpose of this work is to investigate the scaling of 't Hooft-Polyakov monopoles in the early universe. These monopoles are a general prediction of a grand unified theory phase transition in the early universe, so understanding their behavior is important. We tentatively find a scaling law for the monopole separation which predicts that the fraction of the universe's energy in monopoles remains constant in the radiation era, regardless of the initial monopole density. We perform lattice simulations on an expanding lattice with a cosmological background. We use the simplest fields which produce 't Hooft-Polyakov monopoles, namely SU(2) gauge fields and a Higgs field in the adjoint representation. We initialize the fields such that we can control the initial monopole density. At the beginning of the simulations, a damping phase is performed to suppress nonphysical fluctuations in the fields, which are remnants of the initialization. The fields are then evolved according to the discretized field equations. Among other things, the number of monopoles is counted periodically during the simulation. To extend the dynamical range of the runs, the Press-Spergel-Ryden method is used to first grow the monopole size before the main evolution phase. There are different ways to estimate the average separation between monopoles in a monopole network, as well as the root mean square velocity of the monopoles. We use these estimators to find out how the average separation and velocity evolve during the runs. To find the scaling solution of the system, we fit the separation estimate to a function of conformal time. In this way we find that the average separation ξ depends on conformal time η as ξ ∝ η^(1/3), which indicates that the monopole density scales in conformal time in the same way as the critical energy density of the universe.
We additionally find that the velocity measured with the velocity estimators depends on the separation approximately as v ∝ dξ/dη. It has been shown that a possible grand unified phase transition would produce an abundance of 't Hooft-Polyakov monopoles, and that some of these would survive to the present day and come to dominate the energy density of the universe. Our result seemingly disagrees with this prediction, though there are several reasons why the prediction might not be compatible with the model we simulate. For one, in our model the monopoles do not move with thermal velocities, unlike what most of the predictions assume happens in the early universe; future simulations with thermal velocities added would thus be needed. Additionally, we ran simulations only in the radiation-dominated era of the universe. During the matter-dominated era, the monopoles might behave differently.
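Extracting a scaling exponent like the ξ ∝ η^(1/3) above is commonly done by a linear fit in log-log space. The data below are synthetic, generated to follow that power law, not simulation output:

```python
import numpy as np

def fit_power_law(eta, xi):
    """Fit xi = a * eta**p by linear least squares on log(xi) vs log(eta);
    the slope is the scaling exponent p."""
    p, log_a = np.polyfit(np.log(eta), np.log(xi), 1)
    return p, np.exp(log_a)

# Synthetic separation data obeying xi = 2 * eta**(1/3) with 1% noise.
rng = np.random.default_rng(3)
eta = np.linspace(10.0, 100.0, 50)
xi = 2.0 * eta ** (1.0 / 3.0) * np.exp(0.01 * rng.standard_normal(50))
p, a = fit_power_law(eta, xi)
```

With clean scaling data the recovered exponent is close to 1/3; deviations in a real run signal that the network has not yet reached the scaling regime.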

(2023)Ga2O3 has been found to exhibit excellent radiation hardness, making it an ideal candidate for a variety of applications that involve exposure to ionizing radiation, such as space exploration, nuclear power generation, and medical imaging. Understanding the behaviour of Ga2O3 under irradiation is therefore crucial for optimizing its performance in these applications and ensuring their safe and efficient operation. There are five commonly identified polymorphs of Ga2O3, namely the β, α, γ, δ and ε structures; among these phases, β-Ga2O3 is the most stable crystal structure and has attracted the majority of recent attention. In this thesis, we used molecular dynamics simulations with newly developed machine-learned Gaussian approximation potentials to investigate radiation damage in β-Ga2O3. We inspected the gradual structural change in the β-Ga2O3 lattice with increasing doses of implanted Frenkel pairs. The results revealed that O Frenkel pairs have a strong tendency to recombine and return to their original sublattice sites. When Ga and O Frenkel pairs were implanted into the same cell, the crystal structure was damaged and converted to an amorphous phase at low doses. However, the accumulation of pure Ga Frenkel pairs in the simulation cells might induce a transition from β- to γ-Ga2O3 while the O sublattice retains its FCC crystal structure, theoretically corroborating the recent experimental finding that β-Ga2O3 transforms to the γ phase following ion implantation. To gain a better understanding of the natural behaviour of β-Ga2O3 under irradiation, we utilized collision cascade simulations. The results revealed that the O sublattice in β-Ga2O3 is robust and less susceptible to damage, despite O atoms having higher mobility. The collision and recrystallization process resulted in a greater accumulation of Ga defects than O defects, regardless of PKA atom type. 
These results further revealed that displaced Ga ions are hard to recombine into the β-phase Ga sublattice, while the FCC stacking of the O sublattice has a very strong tendency to recover. Our theoretical models of radiation damage in β-Ga2O3 provide insight into the mechanisms underlying defect generation and recovery during experimental ion implantation, which has significant implications for improving the radiation tolerance of Ga2O3, as well as for optimizing its electronic and optical properties.
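Tallying Frenkel pairs in damaged cells, as described above, is commonly done with a Wigner-Seitz-style occupancy analysis: each atom is assigned to its nearest perfect-lattice site, empty sites count as vacancies and multiply-occupied sites as interstitials. A minimal sketch under that assumption (the lattice, positions, and the single displaced atom below are illustrative, not taken from the thesis):

```python
import numpy as np

# Hypothetical Wigner-Seitz-style defect count on a tiny 3x3x3 cubic
# reference lattice. A real analysis would use the beta-Ga2O3 cell and
# periodic boundaries; this only demonstrates the counting principle.
sites = np.array([[i, j, k] for i in range(3)
                             for j in range(3)
                             for k in range(3)], dtype=float)
atoms = sites.copy()

# Create one Frenkel pair: displace atom 0 into the Wigner-Seitz cell
# of the central site (index 13), leaving its own site vacant.
atoms[0] = sites[13] + 0.2

# Assign each atom to its nearest reference site.
dists = np.linalg.norm(atoms[:, None, :] - sites[None, :, :], axis=2)
occupancy = np.bincount(np.argmin(dists, axis=1), minlength=len(sites))

vacancies = int(np.sum(occupancy == 0))                 # empty cells
interstitials = int(np.sum(np.maximum(occupancy - 1, 0)))  # extra atoms per cell

print(f"{vacancies} vacancy, {interstitials} interstitial -> 1 Frenkel pair")
```

Running the count separately on the Ga and O sublattices is what lets a study like this one report per-species defect accumulation.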

(2022)This thesis reviews state-of-the-art top-down holographic methods used for modeling dense matter in neutron stars. This is done with the help of the Witten-Sakai-Sugimoto (WSS) model, which attempts to construct a holographic version of quantum chromodynamics (QCD) that mimics its features. As a starting chapter, string theory is reviewed briefly so the reader can understand some of the (historical) developments behind this construction. Bosonic strings and superstrings are reviewed alongside conformal field theory, and focus is put on Dp-branes and compactifications of spacetime. This chapter also explains much of the jargon used in the thesis, which otherwise easily obstructs the main message. After a sufficient understanding of string theory has been achieved, we move on to holography and holographic dualities in the next chapter, focusing on AdS/CFT and actual computations using holography. Matching of theories is discussed in order to set up a holographic dictionary. At this point one must choose either a top-down or a bottom-up approach; we use the former, since we are going to use the WSS model. There follows a brief review of QCD and the central features to be reproduced in holographic QCD. Immediately after this, we review the Witten-Sakai-Sugimoto model, which is qualitatively, and sometimes also quantitatively, a reasonable holographic version of QCD. We discuss the WSS model's successes and its room for improvement, especially where they might affect the analysis we are about to perform on neutron stars. Finally, after all this theoretical development, we delve into the world of neutron stars. A quick review of their basic features and astrophysical constraints, along with the difficulties in modeling them, is given. We then discuss two models of neutron stars, the first a toy model with simplified physics and the second a more realistic one. 
The basic workflow required to get from a string-theoretic action to equation-of-state data and other relevant observables is given step by step, and many recent results obtained with this model are reviewed. In the end, the future of the development of holographic duality, of constructing models with it, and of modeling neutron stars is discussed.
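The final step of such a workflow, from an equation of state to stellar observables, is standard general relativity rather than anything specific to the WSS model: given p(ε), the mass-radius relation typically follows from integrating the Tolman-Oppenheimer-Volkoff equations (here in units G = c = 1):

```latex
\frac{dp}{dr} = -\,\frac{\bigl(\varepsilon(r) + p(r)\bigr)\bigl(m(r) + 4\pi r^{3} p(r)\bigr)}{r\bigl(r - 2m(r)\bigr)},
\qquad
\frac{dm}{dr} = 4\pi r^{2}\,\varepsilon(r)
```

Integrating outward from a chosen central pressure until p → 0 yields the stellar radius R and mass M = m(R); scanning over central pressures traces out the mass-radius curve that is compared against astrophysical constraints.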