
Browsing by study line "Particle Physics and Cosmology"


  • Stendahl, Alex (2020)
    The Standard Model of particle physics has been very successful in describing particles and their interactions. In 2012 the last missing piece, the Higgs boson, was discovered at the Large Hadron Collider. However, for all its success, the Standard Model fails to explain some phenomena of nature. Two of these unexplained phenomena are dark matter and the metastability of the electroweak vacuum. In this thesis we study one of the simplest extensions of the Standard Model: the complex singlet scalar extension. In this framework the CP-even component of the singlet mixes with the Standard Model-like Higgs boson through the portal operator to form new mass eigenstates. The CP-odd component is a pseudo-Goldstone boson which could be a viable dark matter candidate. We analyse the parameter space of the model with respect to constraints from particle physics experiments and cosmological observations. The time evolution of the dark matter number density is derived in order to study the process of dark matter freeze-out. The relic density of the dark matter candidate is then calculated with the micrOMEGAs tool, and these calculations are compared to the measured value of the dark matter relic density. Moreover, the electroweak vacuum can be stabilised by the contribution of the singlet scalar to the Standard Model Higgs potential. We derive the β-functions of the couplings in order to study the renormalisation group evolution of the parameters of the model. With the contribution of the portal coupling to the β-function of the Higgs coupling we are able to stabilise the electroweak vacuum up to the Planck scale. The two-loop β-functions are calculated using the SARAH tool.
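The freeze-out process summarised above can be illustrated with a schematic sketch. The thesis itself uses micrOMEGAs, but the underlying Boltzmann equation for the comoving yield, dY/dx = -(λ/x²)(Y² - Y_eq²) with x = m/T, can be integrated directly. The normalisation and all parameter values below are illustrative assumptions, not the thesis's numbers.

```python
import math

def equilibrium_yield(x):
    # Schematic non-relativistic equilibrium yield Y_eq(x), x = m/T;
    # the overall normalisation is absorbed into the coupling lam.
    return 0.145 * x ** 1.5 * math.exp(-x)

def relic_yield(lam, x_start=1.0, x_end=500.0, steps=50000):
    """Integrate dY/dx = -(lam/x^2) * (Y^2 - Y_eq^2) with a backward-Euler
    step (the equation is stiff near equilibrium for large lam)."""
    dx = (x_end - x_start) / steps
    x = x_start
    y = equilibrium_yield(x)
    for _ in range(steps):
        x += dx
        a = dx * lam / x ** 2
        yeq = equilibrium_yield(x)
        # Implicit step: solve a*y1^2 + y1 - (y + a*yeq^2) = 0, positive root.
        y = (-1.0 + math.sqrt(1.0 + 4.0 * a * (y + a * yeq ** 2))) / (2.0 * a)
    return y
```

A larger effective coupling λ (a larger annihilation cross-section) keeps the species in equilibrium longer and freezes out a smaller relic abundance, which is the qualitative behaviour constrained by the measured relic density.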
  • Halkoaho, Johannes (2022)
    The primordial perturbations created by inflation in the early Universe are known to be able to produce a significant amount of primordial black holes and gravitational waves with large amplitudes in some inflationary models. Primordial black holes are produced by primordial scalar perturbations, while gravitational waves arise partly from primordial tensor perturbations and are partly sourced by scalar perturbations. In this thesis we review some of the current literature on the subject and discuss a few inflationary models that are capable of producing primordial scalar perturbations large enough to create a significant amount of primordial black holes. The main focus is on ultra-slow-roll inflation, with a concrete example potential illustrating the dynamics of the scenario, followed by a briefer treatment of some of the alternative models. We start by explaining the background theory necessary for understanding the subject at hand. Then we move on to the inflationary models covered in this thesis. After that we explain the production of primordial black holes and gravitational waves from scalar perturbations. Then we consider primordial black holes as a dark matter candidate and go through the most significant known restrictions on the existence of primordial black holes of different masses. We discuss some of the possible future constraints on the remaining mass window for which primordial black holes could explain all of dark matter. We then briefly discuss two planned space-based gravitational wave detectors that may be able to detect gravitational waves created by inflation.
  • Rantanen, Milla-Maarit (2020)
    Semiconductor radiation detectors are devices used to detect electromagnetic and particle radiation. The signal formation is based on the transport of charges between the valence band and the conduction band. The interaction between the detector material and the radiation generates free electrons and holes that move in opposite directions in the electric field applied between the electrodes. The movement of charges induces a current in the external electrical circuit, which can be used for particle identification, measurement of energy or momentum, timing, or tracking. There are several different detector materials and designs, and new options are continuously developed. Diamond is a detector material that has received a great amount of interest in many fields, owing to its many unique properties. Many of these arise from the diamond crystal structure and the strength of the bond between the carbon atoms. The tight and rigid structure makes diamond a strong and durable material, which allows the operation of diamond detectors in harsh radiation environments. This, combined with the fast signal formation and short response time, makes diamond detectors an excellent choice for high-energy physics applications. The diamond structure also leads to a wide band gap. Thanks to the wide band gap, diamond detectors have low leakage current and can be operated even at high temperatures without protection from surrounding light. The electrical properties of semiconductors depend strongly on the concentration of impurities and crystal defects, so determination of the electrical properties can be used to study the crystal quality of the material. The electrical properties of the material determine the safe operational region of the device, and knowledge of the leakage current and the charge carrier transport mechanism is required for optimized operation of detectors. Characterization of electrical properties is therefore an important part of semiconductor device fabrication. Electrical characterization should be done at different stages of fabrication in order to detect problems at an early stage and to get an idea of what could have caused them. This work describes the quality assurance process of single-crystal CVD (chemical vapour deposition) diamond detectors for the PPS detectors of the CMS experiment. The quality assurance process includes visual inspection of the diamond surfaces and dimensions by optical and cross-polarized light microscopy, and electrical characterization by measurement of leakage current and CCE (charge collection efficiency). The CCE measurement setup was improved with a stage controller, which allows automatic measurement of the CCE at several positions on the diamond detector. The operation of the new setup and the reproducibility of the results were studied by repeated measurements of a reference diamond. The setup could successfully be used to measure the CCE over the whole diamond surface. However, the measurement uncertainty is quite large, and further work is needed to reduce it and to determine the correlation between observed defects and the measured electrical properties.
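As a rough illustration of the CCE quantity measured above: the charge collection efficiency compares the charge actually collected at the electrodes to the charge generated by the energy deposit. The pair-creation energy of ~13 eV per electron-hole pair in diamond is an assumed literature value, and the function name is ours, not the thesis's.

```python
def charge_collection_efficiency(collected_charge_fc, deposited_energy_kev,
                                 pair_creation_energy_ev=13.1):
    """CCE = collected charge / charge generated by the energy deposit.
    ~13 eV per electron-hole pair in diamond is an assumed value."""
    e_fc = 1.602e-19 * 1e15            # elementary charge in femtocoulombs
    pairs = deposited_energy_kev * 1e3 / pair_creation_energy_ev
    return collected_charge_fc / (pairs * e_fc)
```

A CCE of 1 means every generated charge carrier is collected; trapping at crystal defects lowers the value, which is why CCE maps across the surface probe crystal quality.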
  • Veltheim, Otto (2022)
    The measurement of quantum states has been a widely studied problem ever since the discovery of quantum mechanics. In general, we can only measure a quantum state once, as the measurement itself alters the state and, consequently, we lose information about the original state of the system in the process. Furthermore, this single measurement cannot uncover every detail about the system's state, and thus we get only a limited description of the system. However, there are physical processes, e.g. a quantum circuit, which can be expected to create the same state over and over again. This allows us to measure multiple identical copies of the same system in order to gain a fuller characterization of the state. This process of diagnosing a quantum state through measurements is known as quantum state tomography. However, even if we are able to create identical copies of the same system, it is often preferable to keep the number of copies needed as low as possible. In this thesis, we propose a method of optimising the measurements in this regard. The full description of the state requires determining multiple different observables of the system. These observables can be measured from the same copy of the system only if they commute with each other. As the commutation relation is not transitive, it is often quite complicated to find the best way to match the observables with each other according to these commutation relations. This can be handily illustrated with graphs, and finding the best way to divide the observables into commuting sets then reduces to a well-known graph-theoretical problem called graph colouring. Measuring the observables with acceptable accuracy also requires measuring each observable multiple times. This requirement can also be included in the graph colouring approach by using a generalisation called multicolouring. Our results show that this multicolouring approach can offer significant improvements in the number of needed copies when compared to some other known methods.
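A minimal sketch of the graph-colouring idea described above, assuming qubit-wise commutation of Pauli strings as the (sufficient) compatibility criterion. The greedy largest-degree-first colouring below is a standard heuristic, not necessarily the thesis's algorithm, and it does not include the multicolouring generalisation.

```python
from itertools import combinations

def qubitwise_commute(p, q):
    # Pauli strings qubit-wise commute if, position by position, the
    # single-qubit operators are equal or at least one is the identity.
    return all(a == b or a == 'I' or b == 'I' for a, b in zip(p, q))

def measurement_groups(paulis):
    """Greedy graph colouring: vertices are observables, edges join pairs
    that cannot be measured together; each colour is one measurement setting."""
    adjacency = {p: set() for p in paulis}
    for p, q in combinations(paulis, 2):
        if not qubitwise_commute(p, q):
            adjacency[p].add(q)
            adjacency[q].add(p)
    colour = {}
    # Colour high-degree vertices first (a common greedy heuristic).
    for p in sorted(paulis, key=lambda s: -len(adjacency[s])):
        used = {colour[q] for q in adjacency[p] if q in colour}
        colour[p] = next(c for c in range(len(paulis)) if c not in used)
    return colour
```

For example, measurement_groups(["XX", "XI", "IZ", "ZZ", "ZI"]) needs only two measurement settings, one for the X-type strings and one for the Z-type strings, instead of five separate measurements.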
  • Toikka, Nico (2023)
    Particle jets are formed in high-energy proton-proton collisions and then measured by particle physics experiments. These jets, initiated by the splitting and hadronization of color-charged quarks and gluons, serve as important signatures of the strong force and provide a view to size scales smaller than the size of an atom. Understanding jets, their behaviour and structure, is thus a path to understanding one of the four fundamental forces in the known universe. It is not only the strong force that is of interest, however. Studies of Standard Model physics and beyond-Standard-Model physics require a precise measurement of the energies of final-state particles, often represented as jets, to understand our existing theories, to search for new physics hidden among our current experiments and to probe directly for new physics. As experimentally reconstructed objects, the measured jets require calibration. At the CMS experiment the jets are calibrated to the particle-level jet energy scale and their resolution is determined in order to achieve the experimental goals of precision and understanding. During the many-step calibration process, the position, energy and structure of the jets are taken into account to provide the most accurate calibration possible. It is also of great importance whether a jet is initiated by a gluon or a quark, as this affects the jet's structure, the distribution of energy among its constituents and the number of constituents. These differences cause disparities when calibrating the jets. Understanding jets at the theory level is also important for simulation, which is utilized heavily during calibration and represents our current theoretical understanding of particle physics. This thesis presents a measurement of the relative response between light-quark (up, down and strange) and gluon jets from data recorded by the CMS experiment during 2018. The relative response is a measure of calibration between the objects and helps to show where the difference between quark and gluon jets is largest. The discrimination between light quarks and gluons is performed with machine learning tools, and the relative response is compared at multiple stages of reconstruction to see how different effects affect the response. The dijet sample used in this study provides a full view of the phase space in pT and |η|, with the analysis covering both quark- and gluon-dominated regions of the space. These studies can then be continued with similar investigations of other samples, with the possibility of using the combined results as part of the calibration chain.
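The relative-response comparison can be sketched schematically as follows. The per-jet response definition R = pT_reco/pT_gen and the simple per-bin averaging are illustrative assumptions for this sketch, not the CMS calibration procedure itself.

```python
from collections import defaultdict

def relative_response(jets, pt_bins):
    """jets: iterable of (pt_gen, pt_reco, flavour), flavour 'q' or 'g'.
    Returns {bin_index: <R_q>/<R_g>} with R = pt_reco/pt_gen per jet."""
    acc = defaultdict(lambda: {'q': [0.0, 0], 'g': [0.0, 0]})
    for pt_gen, pt_reco, flavour in jets:
        for i, (lo, hi) in enumerate(pt_bins):
            if lo <= pt_gen < hi:
                acc[i][flavour][0] += pt_reco / pt_gen
                acc[i][flavour][1] += 1
    out = {}
    for i, fl in acc.items():
        # Report the ratio only where both flavours populate the bin.
        if fl['q'][1] > 0 and fl['g'][1] > 0:
            out[i] = (fl['q'][0] / fl['q'][1]) / (fl['g'][0] / fl['g'][1])
    return out
```

A ratio away from one in a given pT bin indicates where quark and gluon jets respond differently and where a flavour-dependent correction would matter most.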
  • Virta, Maxim (2022)
    Strongly coupled matter called quark–gluon plasma (QGP) is formed in heavy-ion collisions at RHIC [1, 2] and the LHC [3, 4]. The expansion of this matter, driven by pressure gradients, is known to be hydrodynamic. Computations show that the expanding QGP has a small shear viscosity to entropy density ratio (η/s), close to the known lower bound 1/4π [5]. In such a medium one expects that jets passing through it would create Mach cones. Many experimental searches have been performed, but no conclusive evidence of the Mach cone has been found [6, 7]. Mach cones were thought to cause double bumps in azimuthal correlations; however, these were later shown to be the third flow harmonic. In this thesis a new method is proposed for finding the Mach cone with so-called event engineering. The higher-order flow harmonics and their linear response are known to be sensitive to the medium properties [8]. Hence a Mach cone produced by a high-momentum jet would change the system properties and, thus, the observable yields. Different flow observables are then studied by selecting high-energy jet events in different momentum ranges. The observables for the different momenta are then compared to those of all events. Differences found in the flow harmonics and their correlations for different jet momenta are reported, showing evidence of Mach cone formation in heavy-ion collisions. The observations for different jet momenta are then quantified with a χ²-test to see the sensitivities of different observables to the momentum selections.
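As an illustration of the flow observables mentioned above, a flow harmonic v_n can be estimated from a single event's azimuthal angles with a two-particle correlation built from the flow vector Q_n. This textbook estimator ignores non-flow correlations and multi-particle cumulants; it is only a sketch of the kind of quantity studied, not the thesis's analysis.

```python
import cmath

def vn_two_particle(phis, n):
    """Two-particle estimate of the flow coefficient v_n from one event's
    azimuthal angles, with self-correlations removed via the flow vector Q_n."""
    m = len(phis)
    qn = sum(cmath.exp(1j * n * phi) for phi in phis)
    # <2> = (|Q_n|^2 - M) / (M(M-1)); v_n = sqrt(<2>) when positive.
    corr = (abs(qn) ** 2 - m) / (m * (m - 1))
    return corr ** 0.5 if corr > 0 else 0.0
```

An event with all particles at φ = 0 or π has maximal second-harmonic modulation (v₂ → 1), while an isotropic event gives v₂ consistent with zero; jet-triggered event classes would be compared against the all-event baseline with exactly this kind of observable.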
  • Siilin, Kasper (2022)
    I use hydrodynamic cosmological N-body simulations to study the effect that a secondary period of inflation, driven by a spectator field, would have on the Local Group substructures. Simulations of the Local Group have been widely adopted for studying nonlinear structure formation on small scales, essentially because detailed observations of faint dwarf galaxies are mostly limited to within the Local Group and its immediate surroundings. In particular, the ∼ 100 dwarf galaxies discovered out to a radius of 3 Mpc from the Sun constitute a sample that has the potential to discriminate between different cosmological models on small scales, when compared to simulations. The two-period inflaton-curvaton inflation model is one such example, since it gives rise to a small-scale cut-off in the primordial power spectrum, compared to the power spectrum of the ΛCDM model with single-field power-law inflation. I investigate the substructures that form in a simulated analogue of the Local Group, with initial conditions that incorporate such a modified power spectrum. The most striking deviation from standard power-law inflation is the reduction of the total number of subhalos with v_max > 10 km/s, by a factor of ∼ 10 for isolated subhalos and by a factor of ∼ 6 for satellites. However, the reduction is mostly in the number of non-star-forming subhalos, and the studied model thus remains a viable candidate, taking into account the uncertainty in the Local Group total mass estimate. The formation of the first galaxies is also delayed, and the central densities of galaxies with v_max < 50 km/s are lowered: their circular velocities at 1 kpc from the centre are decreased and the radii of maximum circular velocity are increased. As for the stellar mass–metallicity and stellar mass–halo mass relations, or the selection effects from tidal disruption, I find no significant differences between the models.
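A schematic of the modified initial conditions described above: a power-law primordial spectrum multiplied by a small-scale suppression. The exponential cut-off shape and all parameter values here are illustrative assumptions; the actual inflaton-curvaton spectrum used in the thesis has a specific, model-derived form.

```python
import math

def primordial_power(k, amplitude=2.1e-9, n_s=0.965, k_pivot=0.05,
                     k_cut=None, alpha=4.0):
    """Power-law primordial spectrum with an optional exponential
    small-scale cut-off exp(-(k/k_cut)^alpha); purely schematic."""
    p = amplitude * (k / k_pivot) ** (n_s - 1.0)
    if k_cut is not None:
        p *= math.exp(-((k / k_cut) ** alpha))
    return p
```

Power on scales well above the cut-off wavenumber is left untouched, while modes below the cut-off scale are strongly suppressed, which is what depletes the low-mass end of the subhalo population in the simulations.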
  • Lintuluoto, Adelina Eleonora (2021)
    At the Compact Muon Solenoid (CMS) experiment at CERN (the European Organization for Nuclear Research), the building blocks of the Universe are investigated by analysing the observed final-state particles resulting from high-energy proton-proton collisions. However, direct detection of final-state quarks and gluons is not possible due to a phenomenon known as colour confinement. Instead, event properties that correspond closely to the distributions of the underlying quarks and gluons are studied. These event properties are known as jets. Jets are central to particle physics analysis, and our understanding of them, and hence of our Universe, is dependent upon our ability to accurately measure their energy. Unfortunately, current detector technology is imprecise, necessitating downstream correction of measurement discrepancies. To achieve this, the CMS experiment employs a sequential multi-step jet calibration process. The process is performed several times per year, and more often during periods of data collection. Automating the jet calibration would increase the efficiency of the CMS experiment: by automating the code execution, the workflow could be performed independently of the analyst. This, in turn, would speed up the analysis and reduce the analyst's workload. In addition, automation facilitates higher levels of reproducibility. In this thesis, a novel method for automating the derivation of jet energy corrections from simulation is presented. To achieve automation, the methodology utilises declarative programming: the analyst is simply required to express what should be executed, and no longer needs to determine how to execute it. To successfully automate the computation of jet energy corrections, it is necessary to capture detailed information concerning both the computational steps and the computational environment. The former is achieved with a computational workflow, and the latter using container technology. This allows a portable and scalable workflow to be achieved, which is easy to maintain and compare to previous runs. The results of this thesis strongly suggest that capturing complex experimental particle physics analyses with declarative workflow languages is both achievable and advantageous. The productivity of the analyst was improved, and reproducibility was facilitated. However, the method is not without its challenges: declarative programming requires the analyst to think differently about the problem at hand, so there are some sociological challenges to methodological uptake. However, once the extensive benefits are understood, we anticipate widespread adoption of this approach.
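The declarative idea, in miniature: the analyst specifies what each step needs and which runner produces it, and a generic engine decides the execution order. This toy runner and the step names in the usage example are purely illustrative; the thesis uses an actual workflow language and container technology, not this sketch.

```python
def run_workflow(spec, runners):
    """Execute steps in dependency order. 'spec' declares, for each step,
    which runner to use and which steps it needs; the engine, not the
    analyst, decides when each step actually runs."""
    done, results = set(), {}
    while len(done) < len(spec):
        progress = False
        for name, step in spec.items():
            needs = step.get('needs', [])
            if name in done or any(d not in done for d in needs):
                continue
            results[name] = runners[step['run']](*[results[d] for d in needs])
            done.add(name)
            progress = True
        if not progress:
            raise ValueError('cyclic or unsatisfiable dependencies')
    return results
```

A hypothetical jet-calibration spec might read {'select_jets': {'run': 'select'}, 'fit_response': {'run': 'fit', 'needs': ['select_jets']}}: the spec states only the what, and reruns with a changed step are automatically ordered the same way.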
  • Räsänen, Juska (2021)
    Coronal mass ejections (CMEs) are large-scale eruptions of plasma entrained in a magnetic field. They occur in the solar corona, and from there they propagate into interplanetary space along with the solar wind. If a CME travels faster than the surrounding solar wind, a shock wave forms. Shocks driven by CMEs can act as powerful accelerators of charged particles. When charged particles such as electrons are accelerated, they emit electromagnetic radiation, especially in the form of radio waves. Much of the radio emission from CMEs comes in the form of solar radio bursts. Traditionally, solar radio bursts are classified into five types, called type I–V bursts, based on their characteristics and appearance in a dynamic spectrum. Of these five types, type II radio bursts in particular are believed to be signatures of shock waves in the corona and interplanetary space. There are, however, also radio bursts associated with CMEs and shocks that do not fit the description of any of the five standard types. In this thesis, three moving radio bursts associated with a CME that erupted on May 22, 2013 are identified and studied in detail. The characteristics of the bursts do not match those of the usual five types of solar radio bursts. The aim of this work is to ascertain the emission mechanism that causes the observed radio bursts, as well as to locate the sites of electron acceleration that are the sources of the emission. The kinematics and the spectral features of the emission are studied in order to answer these questions. Analysis of the spectral features of the moving bursts showed that they were emitted via plasma emission. Analysis of the kinematics revealed that the moving radio bursts originated unusually high up in the corona, at the northern flank of the CME. The CME studied in this work was preceded by another one which erupted some hours earlier, and the disturbed coronal environment likely caused the radio emission to be emitted from an unusual height. It was found that the bursts likely originated from electrons accelerated at the shock driven by the CME.
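Plasma emission, identified above as the emission mechanism, occurs at (harmonics of) the local plasma frequency, f_p ≈ 8980 √(n_e [cm⁻³]) Hz, so an observed burst frequency implies a local electron density, which in turn constrains the emission height through a coronal density model. A small sketch of that inversion (the function name is ours):

```python
def electron_density_from_plasma_freq(f_mhz, harmonic=1):
    """Invert f_p ≈ 8980 * sqrt(n_e [cm^-3]) Hz for the electron density
    implied by plasma emission observed at f_mhz, at the fundamental
    (harmonic=1) or the second harmonic (harmonic=2)."""
    f_p_hz = f_mhz * 1e6 / harmonic
    return (f_p_hz / 8980.0) ** 2
```

A burst drifting from high to low frequency in the dynamic spectrum thus traces a source moving outward into lower-density plasma, which is how burst kinematics are tied to heights in the corona.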
  • Bieleviciute, Auguste (2023)
    The high-luminosity upgrade of the Large Hadron Collider (LHC) will result in higher collision rates, and current equipment is not up to par with this future era of operations. Identification and reconstruction of hard interactions may be hampered by the spatial overlap of particle tracks and energy deposits from additional collisions, which often leads to false triggers. In addition, current particle detectors suffer from radiation damage that severely affects the accuracy of our results. The new minimum ionizing particle (MIP) timing detector will be equipped with low-gain avalanche detectors, which have a comparatively small timing resolution that helps with track reconstruction, while their thin design limits the radiation damage over time. In this thesis, we build an experimental set-up in order to study the timing resolution of these detectors closely. To find the timing resolution, we take the time difference between the signals from two detectors and fill it into a histogram, to which we apply a Gaussian fit. The standard deviation of this Gaussian is called the time spread, from which we can estimate the timing resolution. We first build, characterize and improve our experimental set-up using reference samples with a known timing resolution until the set-up is able to reproduce the reference value. We then repeat the measurements with irradiated samples in order to study how radiation damage impacts the timing. We were able to adjust our set-up with reference samples until we measured a timing resolution of 33 ± 2 ps. We use this result to calculate the timing resolution of an irradiated sample (8.0 × 10¹⁴ proton fluence), for which we found a timing resolution of 62 ± 2 ps. This thesis also discusses the analysis of the data and how the data can be re-analyzed to try to improve the final result.
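The extraction described above can be sketched as follows. The time spread of the pair adds the two single-detector resolutions in quadrature, σ_Δt² = σ_DUT² + σ_ref², so a known reference resolution can be subtracted; with no reference given, the two detectors are assumed identical. For simplicity this sketch uses the sample standard deviation of the time differences rather than a Gaussian fit to the histogram.

```python
import math
import statistics

def timing_resolution(dt_samples_ps, sigma_ref_ps=None):
    """Resolution from the spread of pair time differences:
    sigma_dt^2 = sigma_dut^2 + sigma_ref^2. With no reference resolution
    given, assume two identical detectors: sigma = sigma_dt / sqrt(2)."""
    sigma_dt = statistics.stdev(dt_samples_ps)
    if sigma_ref_ps is None:
        return sigma_dt / math.sqrt(2.0)
    # Quadrature subtraction, clamped so noise cannot produce sqrt of a negative.
    return math.sqrt(max(sigma_dt ** 2 - sigma_ref_ps ** 2, 0.0))
```

In a real analysis the Gaussian fit is preferred because it is insensitive to non-Gaussian tails in the Δt distribution, which would inflate a plain standard deviation.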
  • Edwards, Ethan (2024)
    Cosmological first-order phase transitions (FOPTs) are a hypothetical scenario occurring in the early universe in which bubbles nucleate and expand, generating gravitational waves (GWs). These transitions interest scientists due to their occurrence in extensions to the Standard Model of particle physics, their potential for providing insight into open questions in particle physics and cosmology, and the possibility of observing their signature with the planned Laser Interferometer Space Antenna (LISA). Modeling GW production from FOPTs is thus a topic of active research. In FOPT models, GW production is split into three sources: collisions between bubble walls (Ω_env), overlapping fluid shells (Ω_sw), and fluid turbulence (Ω_turb). When modeling the contribution from Ω_sw in 1D spherical simulations, a sound shell model is often employed, which assumes that fluid shells reach a calculable self-similar state of expansion before overlapping. In this thesis, I determine when this asymptotic expansion state is reached by defining and calculating a relaxation time t_s and transition rate β_s for 1D expanding fluid shells. I model two scenarios, a thin-walled and a thick-walled perturbed nucleation bubble expanding in a relativistic fluid, in the limit of fast detonations and weak coupling. In each case, respectively, the relaxation time and transition rate are determined to be t_s T_c = 7.422(21) × 10³, β_s/T_c = 1.3474(38) × 10⁻⁴; and t_s T_c = 9.901(33) × 10⁵, β_s/T_c = 1.011(35) × 10⁻⁵. When fixing the critical temperature T_c below which bubbles can nucleate, these results predict that when the transition rate β > β_s, the GW spectrum produced assuming relaxed fluid shells may be inaccurate. In addition to this main result, I also compare various methods for estimating the bubble wall expansion velocity. These results are useful for 3D simulations, in which direct methods for determining the wall velocity are unwieldy.