Browsing by master's degree program "Magisterprogrammet i teoretiska och beräkningsmetoder" (Master's Programme in Theoretical and Computational Methods)


  • Hatakka, Ilari (2023)
    In quantum field theory, the objects of interest are the n-point vacuum expectations, which can be calculated from the path integral. The path integral usually used in physics is not well-defined, and the main motivation for this thesis is to give axioms that a well-defined path integral candidate has to at least satisfy for it to be physically relevant - we want the path integral to have properties that allow us to reconstruct the physically interesting objects from it. The axioms given in this thesis are called the Osterwalder-Schrader axioms, and the reconstruction of the physical objects from a path integral satisfying the axioms is called the Osterwalder-Schrader reconstruction. The Osterwalder-Schrader axioms are special in the sense that they are stated in terms of Euclidean spacetime instead of the physically relevant Minkowski spacetime. As the physical objects live in Minkowski spacetime, this means that when reconstructing the physically relevant objects we have to go back to Minkowski spacetime at some point. This thesis has three parts (and an introduction). In the first part we give a brief introduction to the parts of functional analysis we will need later - the theory of distributions and of generators of families of operators. The second part covers the Osterwalder-Schrader axioms and the reconstruction of the physically relevant objects from the path integral. In the last part we check that the path integral for the free field of mass m satisfies the Osterwalder-Schrader axioms.
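The pivotal axiom alluded to above can be sketched in its standard textbook form (the precise test-function spaces and conventions are those of the thesis; this is only the schematic shape). Reflection positivity requires, for any finite sequence of test functions $f_n$ supported at positive Euclidean times,

```latex
\sum_{n,m} S_{n+m}\big(\Theta f_n \otimes f_m\big) \;\ge\; 0,
\qquad
(\Theta f)(x_1,\dots,x_n) = \overline{f(\theta x_n,\dots,\theta x_1)},
\quad
\theta(\tau,\vec{x}) = (-\tau,\vec{x}),
```

where the $S_k$ are the Euclidean Schwinger functions. It is this positivity that supplies the Hilbert-space inner product in the Osterwalder-Schrader reconstruction.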
  • Kejzar, Nejc (2020)
    AMPA receptors (AMPARs) are the most numerous synaptic receptors in the hippocampus. Here they take one of the central roles in the expression of long-term potentiation (LTP), the molecular mechanism underlying learning and memory. They belong to the group of glutamate-gated ion channels and have a structure characterized by four discrete domains. While the functional roles of the C-terminal (CTD), transmembrane (TMD) and ligand-binding (LBD) domains have largely been established, the regulatory capacity - if any - of the N-terminal domain (NTD) remains questionable. In this thesis we used molecular dynamics (MD) simulations to show directly for the first time that the AMPAR NTD can respond to the pH of the surrounding medium. Specifically, we identified a pair of histidine residues in the NTD interface that are capable of acting as pH sensors - upon acidification of the environment the two histidines become protonated and, through electrostatic repulsion, destabilize the NTD interface. If experimentally validated, this model could provide a mechanistic explanation of AMPAR clustering in synapses. Due to the low affinity of AMPARs for glutamate under physiological conditions, it has been proposed that they form clusters right underneath glutamate release sites in order to produce sufficiently large postsynaptic depolarizations. Since the lumen of glutamate vesicles is acidic, presynaptic glutamate release is coupled to transient acidification of the synaptic environment. In our model this acidification is detected by the identified interface histidines, which upon protonation cause a structural rearrangement of the NTD interface. This rearrangement could lead to the formation of interactions either with other AMPARs or with synaptic anchor proteins (such as PSD-95), resulting in AMPAR clustering underneath glutamate release sites.
  • Takko, Heli (2021)
    Quantum entanglement is one of the biggest mysteries in physics. In gauge field theories, the amount of entanglement can be measured with certain quantities. For an entangled system, these quantities exhibit correlations in both time and spatial coordinates that do not fit into our current understanding of the locality of measures and correlations. Difficulties in obtaining probes for entanglement in gauge theories arise from the problem of nonlocality, which can be stated as the problem of decomposing the space of physical states into different regions. In this thesis, we focus on a particular supersymmetric Yang-Mills theory that is holographically dual to a classical gravity theory in an asymptotically anti-de Sitter spacetime. We introduce the most important holographic probes of entanglement and discuss the inequalities obtained from the dual formulation of the entanglement entropy. We introduce subregion duality as an interesting conjecture of holography that remains under research. The status of subregion duality is not necessarily settled in arbitrary geometries, as new results have appeared that either suggest a violation of subregion duality or run against our common understanding of holography by reconstructing the bulk metric beyond the entanglement wedge. This thesis investigates this aspect of subregion duality by evaluating bulk probes, such as the Wilson loop, for two different geometries (deconfining and confining). We aim to find whether or not these probes remain inside the entanglement wedge. We find that, for both geometries in four dimensions, subregion duality is not violated. In other words, the reduced CFT state does not encode information about the bulk beyond the entanglement wedge. However, we cannot assume this is the case for arbitrary geometries and, therefore, this topic will remain of interest for future research.
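The best-known of the holographic probes of entanglement referred to here is the Ryu-Takayanagi prescription, which for a boundary region $A$ computes the entanglement entropy as a minimal-area problem in the bulk:

```latex
S_A \;=\; \min_{\gamma_A} \frac{\operatorname{Area}(\gamma_A)}{4 G_N},
```

where the minimisation runs over bulk surfaces $\gamma_A$ anchored on $\partial A$ and homologous to $A$; the entanglement wedge is then the bulk domain of dependence of the region between $A$ and the minimal surface.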
  • Tyree, Juniper (2023)
    Response Surface Models (RSMs) are cheap, reduced-complexity, and usually statistical models that are fit to the response of more complex models to approximate their outputs with higher computational efficiency. In atmospheric science, there has been a continuous push to reduce the amount of training data required to fit an RSM. With this reduction in costly data gathering, RSMs can be used more ad hoc and quickly adapted to new applications. However, with the decrease in diverse training data, the risk increases that the RSM is eventually used on inputs on which it cannot make a prediction. If there is no indication from the model that its outputs can no longer be trusted, trust in the entire RSM decreases. We present a framework for building prudent RSMs that always output predictions with confidence and uncertainty estimates. We show how confidence and uncertainty can be propagated through downstream analysis such that even predictions on inputs outside the training domain or in areas of high variance can be integrated. Specifically, we introduce the Icarus RSM architecture, which combines an out-of-distribution detector, a prediction model, and an uncertainty quantifier. Icarus-produced predictions and their uncertainties are conditioned on the confidence that the inputs come from the same distribution that the RSM was trained on. We put particular focus on exploring out-of-distribution detection, for which we conduct a broad literature review, design an intuitive evaluation procedure with three easily visualisable toy examples, and suggest two methodological improvements. We also explore and evaluate popular prediction models and uncertainty quantifiers. We use the one-dimensional atmospheric chemistry transport model SOSAA as an example of a complex model for this thesis.
We produce a dataset of model inputs and outputs from simulations of the atmospheric conditions along air parcel trajectories that arrived at the SMEAR II measurement station in Hyytiälä, Finland, in May 2018. We evaluate several prediction models and uncertainty quantification methods on this dataset and construct a proof-of-concept SOSAA RSM using the Icarus RSM architecture. The SOSAA RSM is built on pairwise-difference regression using random forests and an auto-associative out-of-distribution detector with a confidence scorer, which is trained with both the original training inputs and new synthetic out-of-distribution samples. We also design a graphical user interface to configure the SOSAA model and trial the SOSAA RSM. We provide recommendations for out-of-distribution detection, prediction models, and uncertainty quantification based on our exploration of these three systems. We also stress-test the proof-of-concept SOSAA RSM implementation to reveal its limitations for predicting model perturbation outputs and show directions for valuable future research. Finally, our experiments affirm the importance of reporting predictions alongside well-calibrated confidence scores and uncertainty levels so that the predictions can be used with confidence and certainty in scientific research applications.
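The thesis data and the full Icarus architecture are not reproduced here, but the core idea of pairwise-difference regression with random forests can be sketched on a hypothetical one-dimensional target: train on pairs of inputs with the difference of their outputs as the label, then predict a new point by averaging anchor-corrected estimates, with the spread over anchors doubling as a rough uncertainty proxy. All data and parameter choices below are illustrative assumptions, not the SOSAA RSM.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy training data: y = 2*x (hypothetical stand-in for SOSAA outputs)
X = rng.uniform(0, 1, size=(50, 1))
y = 2.0 * X[:, 0]

# Build all ordered pairs; features are (x_i, x_j), target is y_i - y_j
i, j = np.meshgrid(np.arange(len(X)), np.arange(len(X)), indexing="ij")
pair_X = np.hstack([X[i.ravel()], X[j.ravel()]])
pair_y = y[i.ravel()] - y[j.ravel()]

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(pair_X, pair_y)

def predict(x_new, X_anchor, y_anchor, model):
    """Predict y(x_new) by averaging anchor estimates y_j + f(x_new, x_j).

    The spread of the per-anchor estimates serves as a simple uncertainty proxy.
    """
    pairs = np.hstack([np.repeat(x_new[None, :], len(X_anchor), axis=0), X_anchor])
    estimates = y_anchor + model.predict(pairs)
    return estimates.mean(), estimates.std()

mean, std = predict(np.array([0.5]), X, y, model)
```

Because every anchor contributes its own estimate, predictions far from the training domain tend to show a large spread, which is precisely the failure signal a prudent RSM needs.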
  • Enckell, Anastasia (2023)
    Numerical techniques have become powerful tools for studying quantum systems. Eventually, quantum computers may enable novel ways to perform numerical simulations and conquer problems that arise in classical simulations of highly entangled matter. Simple one-dimensional systems of low entanglement are efficiently simulatable on a classical computer using tensor networks. Such toy simulations also give us the opportunity to study the methods of quantum simulation, such as different transformation techniques and optimization algorithms, that could benefit near-term quantum technologies. In this thesis, we study a theoretical framework for fermionic quantum simulation and simulate the real-time evolution of particles governed by the Gross-Neveu model in one dimension. To simulate the Gross-Neveu model classically, we use the Matrix Product State (MPS) method. Starting from the continuum case, we discretise the model by putting it on a lattice and encode the time evolution operator with the help of two fermion-to-qubit transformations, Jordan-Wigner and Bravyi-Kitaev. The simulation results are visualised as plots of probability density. The results indicate the expected flavour and spatial symmetry of the system. The comparison of the two transformations shows better performance of the Jordan-Wigner transformation both before and after gate reduction.
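As an illustration of the first of the two encodings, a minimal numpy sketch of the Jordan-Wigner transformation (not the thesis code, and using one common sign/ordering convention) maps fermionic modes to qubit operators and verifies the canonical anticommutation relations:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def annihilation(j, n):
    """Jordan-Wigner image of the fermionic annihilation operator a_j on n sites:
    a Z string on sites k < j followed by (X + iY)/2 on site j."""
    ops = [Z] * j + [(X + 1j * Y) / 2] + [I2] * (n - j - 1)
    return kron_all(ops)

n = 3
a = [annihilation(j, n) for j in range(n)]

# Check the canonical anticommutation relations {a_i, a_j^dagger} = delta_ij
for i in range(n):
    for j in range(n):
        anti = a[i] @ a[j].conj().T + a[j].conj().T @ a[i]
        expected = np.eye(2**n) if i == j else np.zeros((2**n, 2**n))
        assert np.allclose(anti, expected)
```

The Z strings are what enforce fermionic statistics between distant sites; the Bravyi-Kitaev encoding trades these O(n) strings for O(log n) ones at the cost of a more involved mapping.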
  • Hernandez Serrano, Ainhoa (2023)
    Using quantum algorithms to carry out ML tasks is what is known as Quantum Machine Learning (QML) and the methods developed within this field have the potential to outperform their classical counterparts in solving certain learning problems. The development of the field is partly dependent on that of a functional quantum random access memory (QRAM), called for by some of the algorithms devised. Such a device would store data in a superposition and could then be queried when algorithms require it, similarly to its classical counterpart, allowing for efficient data access. Taking an axiomatic approach to QRAM, this thesis provides the main considerations, assumptions and results regarding QRAM and yields a QRAM handbook and comprehensive introduction to the literature pertaining to it.
  • Suominen, Heikki (2022)
    Quantum computers are one of the most prominent emerging technologies of the 21st century. While several practical implementations of the qubit—the elemental unit of information in quantum computers—exist, the family of superconducting qubits remains one of the most promising platforms for scaled-up quantum computers. Lately, as the limiting factor of non-error-corrected quantum computers has begun to shift from the number of qubits to gate fidelity, efficient control and readout parameter optimization has become a field of significant scientific interest. Since these procedures are multibranched and difficult to automate, a great deal of effort has gone into developing associated software, and even technologies such as machine learning are making an appearance in modern programs. In this thesis, we offer an extensive theoretical background on superconducting transmon qubits, starting from the classical models of electronic circuits and moving towards circuit quantum electrodynamics. We consider how the qubit is controlled, how its state is read out, and how the information contained in it can become corrupted by noise. We review theoretical models for characteristic parameters such as decoherence times, and see how control pulse parameters such as amplitude and rise time affect gate fidelity. We also discuss the procedure for experimentally obtaining characteristic qubit parameters, and the optimized randomized benchmarking for immediate tune-up (ORBIT) protocol for control pulse optimization, both in theory and alongside novel experimental results. The experiments are carried out with refactored characterization software and novel ORBIT software, using the premises and resources of the Quantum Computing and Devices (QCD) group at Aalto University.
The refactoring project, together with the software used for the ORBIT protocol, aims to provide the QCD group with efficient and streamlined methods for finding characteristic qubit parameters and high-fidelity control pulses. In the last parts of the thesis, we evaluate the success and shortcomings of the introduced projects, and discuss future perspectives for the software.
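The randomized-benchmarking idea behind ORBIT can be illustrated with a toy fit (the synthetic data and parameter values below are assumptions, not the thesis results): the average sequence fidelity decays as F(m) = A p^m + B with sequence length m, and the depolarizing parameter p yields the average error per Clifford.

```python
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, p, B):
    """Randomized-benchmarking model: average sequence fidelity F(m) = A p^m + B."""
    return A * p**m + B

rng = np.random.default_rng(1)
m = np.arange(1, 200, 10)
true_A, true_p, true_B = 0.5, 0.99, 0.5

# Synthetic fidelity data with a small amount of measurement noise
F = rb_decay(m, true_A, true_p, true_B) + rng.normal(0, 0.002, m.size)

(A, p, B), _ = curve_fit(rb_decay, m, F, p0=[0.5, 0.95, 0.5])

# Average error per Clifford for a single qubit (d = 2): r = (1 - p) / 2
r = (1 - p) / 2
```

ORBIT-style tune-up exploits the same decay: rather than extracting r precisely, it compares the fidelity at a fixed long sequence length while the optimizer varies pulse parameters.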
  • Autio, Antti (2020)
    The Standard Model of particle physics describes the elementary particles and their interactions. Since the discovery of the Higgs boson (2012), all particles predicted by the Standard Model have been observed. The Standard Model is a very precise theory, but not everything that has been observed can be explained within it. Supersymmetry is one attractive way to extend the Standard Model; however, low-energy supersymmetry has not been observed. Supersymmetry requires a so-called two-Higgs-doublet model to work. The ordinary Standard Model has one Higgs doublet field. A Higgs doublet contains two complex fields, i.e. four degrees of freedom in total, so one might expect it to give rise to four particles. Three of the degrees of freedom, however, are absorbed by the intermediate bosons W+, W− and Z, leaving a single Higgs boson. In two-Higgs-doublet models there are two doublet fields. Since this adds one doublet with four degrees of freedom to the theory, there are five Higgs particles in total: three electrically neutral (h, H and A) and two electrically charged (H+ and H−). This work focuses on a model-independent search for the charged Higgs bosons. The study uses data collected by the CMS detector (Compact Muon Solenoid) at the LHC (Large Hadron Collider). The search for electrically charged Higgs bosons focuses on final states in which the charged Higgs boson decays into a hadronic tau lepton (i.e. a tau lepton that in turn decays into hadrons) and a tau neutrino. The so-called trigger is a way of filtering the data at the recording stage, since the collisions produce so much data that storing all of it is impossible. Different triggers accept collision events according to different criteria. The trigger introduces significant systematic uncertainties. In this work, the trigger uncertainties are reduced by using triggers whose uncertainties are smaller.
For this, the analysis must be split into independent parts whose uncertainties are treated separately. Finally, the parts are statistically combined, which is expected to reduce the total uncertainty. This work investigates whether this uncertainty decreases, and by how much. Using these methods, we were able to find small improvements in the precision of the analysis for heavy charged Higgs bosons. In addition, the expected limit, above which charged Higgs boson production in this final state would be observable, improves surprisingly; this improvement of the limit is studied by emulating the trigger. The work is intended to be included in the results to be published from the full Run 2 dataset.
  • Toikka, Nico (2023)
    Particle jets are formed in high-energy proton-proton collisions and then measured by particle physics experiments. These jets, initiated by the splitting and hadronization of colour-charged quarks and gluons, serve as important signatures of the strong force and provide a view to size scales smaller than the size of an atom. Understanding jets, their behaviour and structure, is thus a path to understanding one of the four fundamental forces in the known universe. But it is not only the strong force that is of interest. Studies of Standard Model physics and beyond-Standard-Model physics require a precise measurement of the energies of final-state particles, represented often as jets, to understand our existing theories, to search for new physics hidden among our current experiments and to directly probe for the new physics. As experimentally reconstructed objects, the measured jets require calibration. At the CMS experiment the jets are calibrated to the particle-level jet energy scale and their resolution is determined to achieve the experimental goals of precision and understanding. During the many-step process of calibration, the position, energy and structure of the jets are taken into account to provide the most accurate calibration possible. It is also of great importance whether the jet is initiated by a gluon or a quark, as this affects the jet's structure, the distribution of energy among its constituents and the number of constituents. These differences cause disparities when calibrating the jets. Understanding of jets at the theory level is also important for simulation, which is utilized heavily during calibration and represents our current theoretical understanding of particle physics. This thesis presents a measurement of the relative response between light-quark (up, down and strange) and gluon jets from the data of the CMS experiment measured during 2018.
The relative response is a measure of calibration between the objects and helps to show where the difference of quark and gluon jets is the largest. The discrimination between light quarks and gluons is performed with machine learning tools, and the relative response is compared at multiple stages of reconstruction to see how different effects affect the response. The dijet sample that is used in this study provides a full view of the phase space in pT and |eta|, with analysis covering both quark and gluon dominated regions of the space. These studies can then be continued with similar investigations of other samples, with the possibility of using the combined results as part of the calibration chain.
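A toy sketch of the central quantity (with invented numbers, not CMS data): the response of a jet is the ratio of reconstructed to true pT, and the relative response is the ratio of the mean light-quark response to the mean gluon response in bins of pT.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical toy jets: true pT and reconstructed pT with a flavour-dependent response.
# The assumed 0.98 (quark) vs 0.96 (gluon) mean responses are purely illustrative.
n = 10000
pt_true = rng.uniform(50, 500, n)
is_gluon = rng.random(n) < 0.5
response_mean = np.where(is_gluon, 0.96, 0.98)
pt_reco = pt_true * rng.normal(response_mean, 0.05)

bins = np.linspace(50, 500, 10)
idx = np.digitize(pt_true, bins)

def mean_response(mask):
    """Mean reconstructed/true pT ratio per pT bin for the selected jets."""
    return np.array([
        (pt_reco[mask & (idx == b)] / pt_true[mask & (idx == b)]).mean()
        for b in range(1, len(bins))
    ])

# Relative response: ratio of light-quark to gluon jet response per pT bin
rel = mean_response(~is_gluon) / mean_response(is_gluon)
```

In a real measurement the flavour label would come from a quark/gluon discriminant rather than generator truth, and the binning would extend over |eta| as well as pT.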
  • Paloranta, Matias Mikko Aleksi (2023)
    Low-frequency $1/f$ noise is ubiquitous, found in all electronic devices and in areas as diverse as music, economics and biological systems. Despite valiant efforts, the source of $1/f$ noise remains one of the oldest unsolved mysteries in modern physics, nearly 100 years after its initial discovery in 1925. In metallic conductors, resistance $1/f$ noise is commonly attributed to the diffusion of mobile defects that alter the scattering cross section experienced by the charge carriers, and models based on two-level tunneling systems (TLTS) are typically employed. However, a model based on the dynamics of mobile defects forming temporary clusters would naturally offer the long-term correlations required by $1/f$ noise via the nearly limitless number of configurations among a group of defects. Resistance $1/f$ noise due to such motion of mobile defects was studied via Monte Carlo simulations of a simple resistor network resembling an atomic lattice. The defects migrate through the lattice via thermally activated hopping motion, causing fluctuations in the resistance due to the varying scattering cross section. The power spectral density (PSD) $S(f)$ of the simulated resistance noise was then calculated and first compared to $S(f)=C/f^\alpha$ noise, where $C$ is a constant and $\alpha$ is ideally close to unity. The value of $\alpha$ was estimated via a linear fit of the noise PSD on a log-log scale. The resistor network was simulated with varying values of temperature, system size and concentration of defects. The noise produced by the simulations did not yield pure $1/f^\alpha$ noise; instead, the lowest frequencies displayed a white noise tail, changing to $1/f^\alpha$ noise between $10^{-4}$ and $10^{-2}$~Hz. In this way the spectrum of the simulated noise resembles a Lorentzian. The value of $\alpha$ was found to be most sensitive to temperature $T$, which directly affects the motion of the defects.
At high $T$ the value of $\alpha$ was closer to 1, whereas at low $T$ it was closer to $1.5$. Varying the size of the system was found to have little impact on $\alpha$ when the temperature and concentration of defects were kept fixed. Increasing the number of defects was found to have slightly more effect on $\alpha$ when the temperature and system size were kept fixed. The value of $\alpha$ was closer to unity when the concentration of defects was higher, but the effect was not nearly as pronounced as that of varying the temperature. In addition, the simulated noise was compared to a PSD of the form $S(f)\propto e^{-\sqrt{N}/T}1/f$, where $N$ is the size of the system, following recent theoretical developments. The $1/f^\alpha$ part of the simulated noise was found to roughly follow the above equation, but the results remain inconclusive. Although the simple toy model did not produce pure $1/f^\alpha$ noise, the dynamics of the mobile defects do seem to have an effect on the noise PSD, yielding noise closer to $1/f$ when there are more interactions between the defects due to either higher mobility or higher concentration of defects (disregarding the white noise tail). Recent experimental research on high-quality graphene employing more rigorous kinetic Monte Carlo simulations has displayed more promising results. This indicates that the dynamics of temporary cluster formation of mobile defects is relevant for understanding $1/f$ noise in metallic conductors, offering an objective for future work.
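The $\alpha$-estimation step described above (a linear fit to the PSD on a log-log scale) can be sketched on synthetic data with a known spectrum; shaping white noise to $1/f^{\alpha}$ in the frequency domain is an illustrative stand-in for the resistor-network output, not the thesis simulation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Shape white Gaussian noise to a 1/f^alpha spectrum in the frequency domain:
# amplitude ~ f^(-alpha/2) so that the PSD scales as f^(-alpha)
n, dt, alpha_true = 2**16, 1.0, 1.0
freqs = np.fft.rfftfreq(n, dt)
spectrum = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
spectrum[1:] *= freqs[1:] ** (-alpha_true / 2)
spectrum[0] = 0.0
signal = np.fft.irfft(spectrum, n)

# Periodogram estimate of the PSD and a log-log linear fit for alpha
psd = np.abs(np.fft.rfft(signal))**2
mask = freqs > 0
slope, intercept = np.polyfit(np.log(freqs[mask]), np.log(psd[mask]), 1)
alpha_est = -slope
```

With a white noise tail below some corner frequency, as in the thesis, the fit would instead be restricted to the frequency window where the power-law behaviour holds.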
  • Heinonen, Arvo Arnoldas (2021)
    The goal of this work is to describe sheaves as an alternative to fiber bundles in geometric prequantization. We briefly go over geometric quantization of Euclidean space and make a connection with canonical quantization. After this, we look at the connections between covers of a topological space, Grothendieck topologies, and systems of local epimorphisms. Finally, we use these concepts to define sheaves and show how they can be used in prequantization in place of the more traditional fiber bundles to ensure the consistency of locally defined quantities.
  • Kormu, Anna (2020)
    First-order electroweak phase transitions (EWPTs) are an attractive area of research. This is mainly due to two reasons. First, they contain aspects that could help to explain the observed baryon asymmetry. Second, strong first-order PTs could produce gravitational waves (GWs) that could be detectable by the Laser Interferometer Space Antenna (LISA), a future space-based GW detector. However, the electroweak PT in the Standard Model (SM) is not a first-order transition but a crossover. In so-called beyond-the-SM theories, first-order transitions are possible. To investigate the possibility of an EWPT and its detection by LISA, we must be able to parametrise the nature of the PT accurately. We are interested in the calculation of the bubble nucleation rate because it can be used to estimate the properties of the possible GW signal, such as the duration of the PT. The nucleation rate essentially quantifies how likely it is for a point in space to tunnel from one phase to the other. The calculation can be done either using perturbation theory or simulations. Perturbative approaches, however, suffer from the so-called infrared problem and are not free of theoretical uncertainty. We need to perform a nonperturbative calculation so that we can determine the nucleation rate accurately and test the results of perturbation theory. In this thesis, we explain the steps that go into a nonperturbative calculation of the bubble nucleation rate. We perform the calculation in the cubic anisotropy model, a theory with two scalar fields. This toy model is one of the simplest in which a radiatively induced transition occurs. We present preliminary results on the nucleation rate and compare them with the thin-wall approximation.
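For reference, the thin-wall approximation mentioned at the end takes its standard form from the free energy of a bubble of radius $R$, with surface tension $\sigma$ and pressure difference $\Delta p$ between the phases:

```latex
E(R) = 4\pi R^2 \sigma - \frac{4\pi}{3} R^3 \Delta p,
\qquad
R_c = \frac{2\sigma}{\Delta p},
\qquad
S_3 = \frac{16\pi \sigma^3}{3\,\Delta p^2},
```

so that the nucleation rate per unit volume is estimated as $\Gamma/V \approx A(T)\, e^{-S_3(T)/T}$; it is this exponent that the nonperturbative lattice calculation tests.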
  • Poltto, Lotta (2024)
    Despite continuous efforts to unveil the true nature of Dark Matter (DM), it still remains a mystery. In this thesis we propose one model that can produce the correct relic abundance of DM in the current Universe while fitting the existing experimentally obtained constraints. In this model we add a singlet fermion, a right-handed neutrino that is not completely sterile, and a heavy real scalar singlet to the Standard Model of Particle Physics (SM) and carry out the relic density calculations. The DM candidate here is the singlet fermion, which acts as a thermally decoupled Weakly Interacting Massive Particle. The theoretical framework is laid out in detail. Special attention is given to obtaining the definition of the relic abundance from the Lee-Weinberg equation in terms of the yield and the decoupling temperature. It is found that the usual way of handling the thermally averaged cross section appearing in these definitions is not suitable in this case. In fact, the usual approximations can only be made when the thermally averaged cross section is almost linear in $s$, a demand that very few models can satisfy. The correct way to treat the cross section, by taking an expansion in terms of the relative velocity, is presented with careful attention to detail. This methodology is then applied to the extension of the SM we introduced. Only tree-level processes are considered. Cross sections are calculated for each possible process to obtain the total cross section needed for the DM relic density calculations. We present how the different free parameters of the theory affect the relic abundance and which masses the right-handed neutrino is allowed to have. It is found that the parameters of this model are heavily constrained. Yet the model is able to fit the constraints obtained from branching ratio and direct detection (DD) experiments, while producing the correct relic density.
This is true when the mixing angle $\theta$ is of the order $1 \times 10^{-4}$ and the right-handed neutrino has a mass of exactly half the mass of the heavy scalar, or higher than the mass of the heavy scalar. It is proposed that allowing lepton mixing and adding a separate mass term for the fermion could make the model less restricted; investigating this would be an interesting direction for future work. However, the proposed DM candidate remains viable, and the upcoming DD experiments will relatively soon reveal whether the singlet fermion is the DM particle we are seeking.
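The expansion in relative velocity referred to above has, schematically, the standard form (with $x = m/T$; the precise coefficients $a$ and $b$ depend on the model's annihilation channels):

```latex
\sigma v_{\mathrm{rel}} = a + b\, v_{\mathrm{rel}}^2 + \mathcal{O}\!\left(v_{\mathrm{rel}}^4\right)
\quad\Longrightarrow\quad
\langle \sigma v \rangle = a + \frac{6b}{x} + \mathcal{O}\!\left(x^{-2}\right),
```

which, inserted into the Lee-Weinberg equation, gives a freeze-out yield and hence a relic abundance scaling as $\Omega h^2 \propto x_f / (a + 3b/x_f)$, with $x_f$ evaluated at decoupling.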
  • Ruosteoja, Tomi (2024)
    Astronomical observations suggest that the cores of neutron stars may have high enough baryon densities to contain quark matter (QM). The unavailability of lattice field theory makes perturbative computations important for understanding the thermodynamics of quantum chromodynamics at high baryon densities. The gluon self-energy contributes to many transport coefficients in QM, such as thermal conductivity and shear viscosity, through quark-quark scattering processes. The contribution of short-wavelength virtual gluons to the self-energy of soft, long-wavelength gluons, known as the hard-thermal-loop (HTL) self-energy, was recently computed at next-to-leading order (NLO) in perturbation theory. In this thesis, we complete the evaluation of the NLO self-energy of soft gluons in cold QM by computing the contribution from soft virtual gluons, known as the one-loop HTL-resummed self-energy. We use HTL effective theory, a reorganization of the perturbative series for soft gluons, within the imaginary-time formulation of thermal field theory. We present the result in terms of elementary functions and finite integrals that can be evaluated numerically. We show explicitly that the NLO self-energy is finite and gauge dependent. The NLO self-energy could be used to compute corrections to transport coefficients in cold QM, which may be relevant for neutron-star applications.
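For orientation, the leading-order HTL self-energy that the NLO computation corrects has, in one common convention, the angular-average form (with light-like $v^{\mu} = (1, \vec v)$, and with the $T=0$ Debye mass of cold quark matter at quark chemical potential $\mu$):

```latex
\Pi^{\mu\nu}_{\mathrm{LO}}(\omega,\vec k)
= m_D^2 \left( -\,\delta^{\mu 0}\delta^{\nu 0}
+ \omega \int \frac{\mathrm{d}\Omega_{\vec v}}{4\pi}\,
\frac{v^{\mu} v^{\nu}}{\omega - \vec v \cdot \vec k + i 0^{+}} \right),
\qquad
m_D^2 = \frac{N_f\, g^2 \mu^2}{2\pi^2},
```

and the NLO piece evaluated in the thesis adds corrections to this from soft, HTL-resummed loop momenta.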
  • Lintuluoto, Adelina Eleonora (2021)
    At the Compact Muon Solenoid (CMS) experiment at CERN (European Organization for Nuclear Research), the building blocks of the Universe are investigated by analysing the observed final-state particles resulting from high-energy proton-proton collisions. However, direct detection of final-state quarks and gluons is not possible due to a phenomenon known as colour confinement. Instead, event properties with a close correspondence to their distributions are studied. These event properties are known as jets. Jets are central to particle physics analysis, and our understanding of them, and hence of our Universe, is dependent upon our ability to accurately measure their energy. Unfortunately, current detector technology is imprecise, necessitating downstream correction of measurement discrepancies. To achieve this, the CMS experiment employs a sequential multi-step jet calibration process. The process is performed several times per year, and more often during periods of data collection. Automating the jet calibration would increase the efficiency of the CMS experiment. By automating the code execution, the workflow could be performed independently of the analyst. This, in turn, would speed up the analysis and reduce the analyst's workload. In addition, automation facilitates higher levels of reproducibility. In this thesis, a novel method for automating the derivation of jet energy corrections from simulation is presented. To achieve automation, the methodology utilises declarative programming: the analyst is simply required to express what should be executed, and no longer needs to determine how to execute it. To successfully automate the computation of jet energy corrections, it is necessary to capture detailed information concerning both the computational steps and the computational environment. The former is achieved with a computational workflow, and the latter using container technology.
This allows a portable and scalable workflow to be achieved, which is easy to maintain and to compare to previous runs. The results of this thesis strongly suggest that capturing complex experimental particle physics analyses with declarative workflow languages is both achievable and advantageous. The productivity of the analyst was improved, and reproducibility was facilitated. However, the method is not without its challenges. Declarative programming requires the analyst to think differently about the problem at hand. As a result, there are some sociological challenges to methodological uptake. However, once the extensive benefits are understood, we anticipate widespread adoption of this approach.
  • Lindblad, Victor (2022)
    This thesis explores the topic of surface diffusion on copper and iron surfaces using an accelerated molecular dynamics (MD) method known as collective variable-driven hyperdynamics (CVHD). The thesis is divided into six main sections: Introduction, Theory, Methods, Simulations, Results and Conclusion. The introduction briefly explains the main interest behind the topic and why diffusion is a difficult subject for classical MD simulations. In the theory section, the physical description of diffusion in metals is explained, as well as the important quantities that can be determined from these types of simulations. The following section covers the basics of the molecular dynamics simulation method. It also describes the theoretical basis of collective variable-driven hyperdynamics and how it is implemented alongside molecular dynamics. The simulations section explains the system-building methodology in more technical terms, discusses key parameters and gives the reasoning for the chosen values of these parameters. Since both copper and iron systems have been simulated, the two sets of systems are explained independently. The results section displays the results for the copper and iron systems separately. In both sets of systems, the obtained activation energy of the dominant diffusion mechanisms remains the main point of focus. Lastly, the results are dissected and summarized.
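The activation energies extracted in the results section come, in essence, from an Arrhenius analysis of hop rates; a minimal sketch with invented numbers (not the thesis data) shows the extraction:

```python
import numpy as np

# Hypothetical hop rates (1/s) at several temperatures (K), following an
# Arrhenius law k = k0 * exp(-Ea / (kB * T)) with assumed Ea = 0.5 eV, k0 = 1e13 1/s
kB = 8.617333262e-5          # Boltzmann constant in eV/K
T = np.array([300.0, 400.0, 500.0, 600.0])
Ea_true, k0 = 0.5, 1e13
rates = k0 * np.exp(-Ea_true / (kB * T))

# Activation energy from the slope of ln(k) vs 1/T; the intercept gives ln(k0)
slope, intercept = np.polyfit(1.0 / T, np.log(rates), 1)
Ea_est = -slope * kB
```

In an accelerated MD setting such as CVHD, the rates entering this fit are the boosted-time event frequencies of the dominant diffusion mechanism at each simulated temperature.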
  • Vaaranta, Antti (2022)
One of the main ways of physically realizing quantum bits for quantum technology is to manufacture them as superconducting circuits. These qubits are artificially built two-level systems that act as carriers of quantum information. They come in a variety of types, but one of the most common in use is the transmon qubit. The transmon is a more stable, improved version of earlier types of superconducting qubits, with longer coherence times. The qubit cannot function properly on its own: it needs other circuit elements for control and readout of its state, so the qubit is only a small part of a larger superconducting circuit with which it interacts. Understanding this interaction, where it comes from and how it can be modified to our liking, allows researchers to design better quantum circuits and to improve existing ones. It is especially important to understand how noise travelling through the qubit drive lines to the chip affects the time evolution of the qubit. Reducing the amount of noise leads to longer coherence times, but it is also possible to engineer the noise to our advantage to uncover novel ways of quantum control. In this thesis the effects of a variable-temperature noise source on the qubit drive line are studied. A theoretical model describing the time evolution of the quantum state is built. The model starts from the basic elements of the quantum circuit and leads to a master equation describing the qubit dynamics. This allows us to understand how the different choices made in the manufacturing process of the quantum circuit affect the time evolution. As a proof of concept, the model is solved numerically using QuTiP in the specific case of a fixed-frequency, dispersive transmon qubit. The solution shows a decohering qubit with no dissipation. The model is also solved in the temperature range 0 K < T ≤ 1 K to show how the decoherence times behave with respect to the temperature of the noise source.
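"Decoherence without dissipation" corresponds to a pure-dephasing master equation: off-diagonal elements of the density matrix decay while the populations stay fixed. A minimal two-level sketch of that behaviour in plain NumPy (an illustrative rate and a simple Euler integrator, not the thesis's QuTiP transmon model):

```python
import numpy as np

# Pure dephasing of a qubit (H = 0 in the rotating frame):
#   d(rho)/dt = (gamma/2) * (sz @ rho @ sz - rho)
# Coherences decay as exp(-gamma * t); populations are untouched,
# i.e. decoherence without dissipation.
sz = np.diag([1.0, -1.0])

def dephase_step(rho, gamma, dt):
    """One forward-Euler step of the pure-dephasing master equation."""
    drho = 0.5 * gamma * (sz @ rho @ sz - rho)
    return rho + dt * drho

gamma, dt, steps = 1.0, 1e-4, 10000       # evolve to t = 1 (illustrative rate)
plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho = np.outer(plus, plus)                 # |+><+|: maximal initial coherence
for _ in range(steps):
    rho = dephase_step(rho, gamma, dt)

# Populations remain 0.5; coherence has decayed to ~0.5 * exp(-1).
print(rho[0, 0].real, rho[0, 1].real)
```

In the thesis the dephasing rate is not a hand-picked constant as here but is derived from the noise spectrum of the variable-temperature source on the drive line, which is what makes the temperature dependence of the decoherence times accessible.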
  • Rosenberg, Otto (2023)
Bayesian networks (BN) are models that map the mutual dependencies and independencies between a set of variables. The structure of the model can be represented as a directed acyclic graph (DAG), a graph in which the nodes represent variables and the directed edges between variables represent dependencies. BNs can be either constructed using knowledge of the system or derived computationally from observational data. Traditionally, BN structure discovery from observational data has been done with heuristic algorithms, but advances in deep learning have made it possible to train neural networks for this task in a supervised manner. This thesis provides an overview of BN structure discovery and discusses the strengths and weaknesses of the emerging supervised paradigm. One supervised method, the EQ-model, which performs structure discovery with equivariant neural networks, is explored in further detail with empirical tests. Through hyperparameter optimisation and a move to online training, the performance of the EQ-model is increased. The EQ-model is still observed to underperform in comparison to a competing score-based model, NOTEARS, but offers convenient features, such as a dramatically faster runtime, that compensate for the reduced performance. Several promising lines of further study that could improve the performance of the EQ-model are also identified.
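The score-based baseline mentioned here, NOTEARS, is built around a smooth characterization of acyclicity: h(W) = tr(exp(W ∘ W)) − d vanishes exactly when the weighted adjacency matrix W describes a DAG, because tr((W ∘ W)^k) counts weighted closed walks of length k. A small sketch of that constraint on toy graphs (the graphs are illustrative, not from the thesis; the matrix exponential is approximated by a truncated power series to stay dependency-free):

```python
import numpy as np

def acyclicity(W, terms=20):
    """NOTEARS constraint h(W) = tr(exp(W∘W)) - d, via a truncated series."""
    d = W.shape[0]
    A = W * W                  # Hadamard square: non-negative entries
    term = np.eye(d)
    h = 0.0
    for k in range(1, terms + 1):
        term = term @ A / k    # accumulates A^k / k!
        h += np.trace(term)    # closed walks of length k, weighted
    return h

dag = np.array([[0, 1, 1],
                [0, 0, 1],
                [0, 0, 0]], dtype=float)   # edges 0->1, 0->2, 1->2: acyclic
cyclic = dag.copy()
cyclic[2, 0] = 1.0                         # adds the cycle 0->1->2->0

print(acyclicity(dag), acyclicity(cyclic))  # 0 for the DAG, positive otherwise
```

NOTEARS drives h(W) to zero with an augmented-Lagrangian scheme while minimizing a data-fit score; the supervised EQ-model instead learns to output a structure directly, which is where its runtime advantage comes from.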
  • Tuokkola, Mikko (2024)
A quantum computer is a new kind of computer which utilizes quantum phenomena in computing. This machine has the potential to solve specific tasks faster than the most powerful supercomputers and therefore has potential real-life applications across various sectors of society. One promising approach to realizing a quantum computer is to store information in superconducting qubits, which are artificial two-level quantum systems made from superconducting electrical circuits. Extremely precise control of these qubits is essential but also challenging, owing to excitations out of the two lowest energy states that constitute the computational subspace. In this thesis, we propose a new way to control a superconducting multimode qubit, using the unimon qubit as an example. By coupling differently to the different modes of the multimode qubit circuit, we cancel the transition from the first excited state to the second excited state, which is typically the main transition causing leakage out of the computational subspace. We present a theoretical description of this scheme, utilizing methods of circuit quantum electrodynamics to compute the energy spectrum and the transition matrix elements of the qubit. Using these results, we simulate the dynamics of the driven unimon qubit undergoing a single-qubit gate. The simulation shows that this method decreases the leakage relative to the conventional way of driving a qubit, in which only one external drive is applied. However, when the conventional method is improved with a more advanced pulse-optimization technique, its leakage becomes smaller than in the case of two drive fields. In addition, we find that the practical implementation of our method may be sensitive to variations in the qubit parameters, and it therefore needs further research. By cancelling one energy-level transition of the qubit, we also find that other transitions in modes of similar frequency are strongly suppressed. This method might therefore be utilized in qubit operations other than quantum gates, such as the qubit resetting process, where driving to higher-frequency modes of the unimon is preferred.
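The cancellation idea can be illustrated with a toy calculation: each of the two drives couples to the qubit through its own set of transition matrix elements, and the drive amplitudes are chosen so that the contributions to the leaky 1→2 matrix element cancel while the computational 0→1 element survives. A sketch with made-up matrix elements (not the unimon's actual values):

```python
import numpy as np

# Each drive sees its own transition matrix elements (illustrative numbers).
M1 = {"01": 1.00, "12": 1.40}   # matrix elements coupling drive 1 to the qubit
M2 = {"01": 0.30, "12": 0.90}   # matrix elements coupling drive 2 to the qubit

# Cancellation condition: a1*M1_12 + a2*M2_12 = 0 fixes the amplitude ratio.
a1 = 1.0
a2 = -a1 * M1["12"] / M2["12"]

net_01 = a1 * M1["01"] + a2 * M2["01"]   # net drive on the computational 0->1
net_12 = a1 * M1["12"] + a2 * M2["12"]   # net drive on the leaky 1->2

print(net_01, net_12)  # finite 0->1 drive, exactly cancelled 1->2 drive
```

The scheme only works when the two drives have genuinely different matrix-element ratios, which is what coupling to different circuit modes provides; if M1 and M2 were proportional, cancelling 1→2 would cancel 0→1 as well.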
  • Kärkkäinen, Aapeli (2023)
One of the main questions in nuclear astrophysics is whether deconfined quark matter exists inside neutron stars. To answer this, the equation of state (EoS) of cold and dense quark matter, which plays an essential role in determining the equation of state of strongly interacting matter (QCD matter) inside neutron stars, needs to be known as accurately as possible [14, 25]. The equation of state, or the pressure, of cold and dense quark matter was evaluated to full three-loop order in perturbation theory back in 1977 by Freedman and McLerran [9, 10], and recently the contributions of the soft momentum scale to the four-loop pressure were evaluated in [13, 14, 15]. What is missing from the full four-loop pressure is the contribution of the hard momentum scale μ. In this thesis we first evaluate the known result for one three-loop Feynman diagram contributing to the three-loop pressure. After this, we derive a new result for a fermionic four-loop master integral at zero temperature and finite quark chemical potentials, which contributes directly to the as yet unknown hard sector of the four-loop pressure of cold and dense quark matter.
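For orientation, the quantity being expanded loop by loop is the pressure of cold quark matter. At leading (free-theory) order, for N_f massless quark flavors at a common chemical potential μ, it is the pressure of a free relativistic Fermi gas, and the perturbative series corrects it in powers of the strong coupling α_s (a textbook result, stated here as background rather than taken from the abstract):

```latex
p_{\text{free}} = N_c N_f \,\frac{\mu^4}{12\pi^2},
\qquad
p = p_{\text{free}}\left(1 - \frac{2\alpha_s}{\pi} + \mathcal{O}(\alpha_s^2)\right),
```

where the O(α_s) coefficient is the two-loop correction; the three- and four-loop terms referred to in the abstract extend this series, with the four-loop hard-scale contribution being the missing piece the thesis's master integral feeds into.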