
Browsing by study line "Particle Physics and Cosmology"


  • Veltheim, Otto (2022)
    The measurement of quantum states has been a widely studied problem ever since the discovery of quantum mechanics. In general, we can only measure a quantum state once, as the measurement itself alters the state and, consequently, we lose information about the original state of the system in the process. Furthermore, this single measurement cannot uncover every detail about the system's state, and thus we get only a limited description of the system. However, there are physical processes, e.g. a quantum circuit, which can be expected to create the same state over and over again. This allows us to measure multiple identical copies of the same system in order to gain a fuller characterization of the state. This process of diagnosing a quantum state through measurements is known as quantum state tomography. However, even if we are able to create identical copies of the same system, it is often preferable to keep the number of needed copies as low as possible. In this thesis, we propose a method of optimising the measurements in this regard. A full description of the state requires determining multiple different observables of the system. These observables can be measured from the same copy of the system only if they commute with each other. As the commutation relation is not transitive, it is often quite complicated to find the best way to group the observables according to these commutation relations. This can be conveniently illustrated with graphs, and finding the best division of the observables into commuting sets can then be reduced to a well-known graph-theoretical problem called graph colouring. Measuring the observables to acceptable accuracy also requires measuring each observable multiple times. This information can be incorporated into the graph-colouring approach by using a generalisation called multicolouring.
Our results show that this multicolouring approach can offer significant improvements in the number of needed copies when compared to some other known methods.
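The grouping step described in the abstract can be sketched as a greedy colouring of the "non-commutation" graph: observables are vertices, edges join pairs that do not commute, and each colour class is a set of observables measurable from the same copy. A minimal illustration for Pauli-string observables (the example strings, the greedy order and the two-line commutation test are illustrative assumptions, not the thesis' actual optimisation):

```python
# Greedy colouring sketch for grouping commuting observables.
# Observables are Pauli strings; a colour corresponds to one measurement
# setting, and same-coloured observables pairwise commute.

def commute(p, q):
    """Two Pauli strings commute iff they differ on an even number of
    positions where both are non-identity."""
    return sum(a != b and a != 'I' and b != 'I' for a, b in zip(p, q)) % 2 == 0

def greedy_colouring(observables):
    """Assign each observable the smallest colour not already used by a
    non-commuting, previously coloured observable."""
    colours = {}
    for obs in observables:
        used = {c for o, c in colours.items() if not commute(obs, o)}
        colours[obs] = next(c for c in range(len(observables)) if c not in used)
    return colours

obs = ["XX", "YY", "ZZ", "XI", "IZ"]
groups = greedy_colouring(obs)
# -> {'XX': 0, 'YY': 0, 'ZZ': 0, 'XI': 1, 'IZ': 1}: two measurement settings
```

With multicolouring, each vertex would instead receive as many colours as the number of repetitions its target accuracy demands; the greedy rule generalises directly.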
  • Toikka, Nico (2023)
    Particle jets are formed in high-energy proton-proton collisions and then measured by particle physics experiments. These jets, initiated by the splitting and hadronization of color-charged quarks and gluons, serve as important signatures of the strong force and provide a view into size scales smaller than an atom. Understanding jets, their behaviour and structure, is thus a path to understanding one of the four fundamental forces in the known universe. But it is not only the strong force that is of interest. Studies of Standard Model physics and beyond-Standard-Model physics require a precise measurement of the energies of final-state particles, often represented as jets, to test our existing theories, to search for new physics hidden among our current experiments and to probe for new physics directly. As experimentally reconstructed objects, the measured jets require calibration. At the CMS experiment, the jets are calibrated to the particle-level jet energy scale and their resolution is determined to achieve the experimental goals of precision and understanding. During the many-step calibration process, the position, energy and structure of the jets are taken into account to provide the most accurate calibration possible. It is also of great importance whether a jet is initiated by a gluon or a quark, as this affects the jet's structure, the distribution of energy among its constituents and the number of constituents. These differences cause disparities when calibrating the jets. Understanding jets at the theory level is also important for simulation, which is used heavily during calibration and represents our current theoretical understanding of particle physics. This thesis presents a measurement of the relative response between light-quark (up, down and strange) and gluon jets from data recorded by the CMS experiment during 2018.
The relative response is a measure of the calibration between the two classes of objects and helps to show where the difference between quark and gluon jets is largest. The discrimination between light quarks and gluons is performed with machine learning tools, and the relative response is compared at multiple stages of reconstruction to see how different effects influence the response. The dijet sample used in this study provides a full view of the phase space in pT and |eta|, with the analysis covering both quark- and gluon-dominated regions. These studies can then be continued with similar investigations of other samples, with the possibility of using the combined results as part of the calibration chain.
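As a rough sketch of the quantity being measured (the jet tuples, numbers and flavour tags below are made-up placeholders, not CMS data): each jet's response is its reconstructed over generated transverse momentum, and the relative response is the ratio of the median responses of the two flavour classes.

```python
from statistics import median

def relative_response(jets):
    """jets: iterable of (pt_reco, pt_gen, flavour) tuples, with flavour
    'quark' or 'gluon'. Returns the two median responses and their ratio."""
    resp = {"quark": [], "gluon": []}
    for pt_reco, pt_gen, flavour in jets:
        resp[flavour].append(pt_reco / pt_gen)
    r_q, r_g = median(resp["quark"]), median(resp["gluon"])
    return r_q, r_g, r_q / r_g

# Toy jets: quark jets reconstructed near unity, gluon jets under-measured.
toy = [(101.0, 100.0, "quark"), (99.0, 100.0, "quark"),
       (95.0, 100.0, "gluon"), (97.0, 100.0, "gluon")]
r_q, r_g, ratio = relative_response(toy)  # r_q ~ 1.00, r_g ~ 0.96
```

In the actual analysis the same comparison would be made per bin of pT and |eta|, which is what localises where the quark-gluon difference is largest.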
  • Virta, Maxim (2022)
    Strongly coupled matter called quark–gluon plasma (QGP) is formed in heavy-ion collisions at RHIC [1, 2] and the LHC [3, 4]. The expansion of this matter, driven by pressure gradients, is well described by hydrodynamics. Computations show that the expanding QGP has a small shear viscosity to entropy density ratio (η/s), close to the conjectured lower bound 1/(4π) [5]. In such a medium one expects that jets passing through it would create Mach cones. Many experimental searches have been performed, but no conclusive evidence of a Mach cone has been found [6, 7]. Mach cones were once thought to cause the double-bump structure in azimuthal correlations; however, this structure was later shown to arise from the third flow harmonic. In this thesis a new method is proposed for finding the Mach cone with so-called event engineering. The higher-order flow harmonics and their linear response are known to be sensitive to the medium properties [8]. Hence a Mach cone produced by a high-momentum jet would change the system properties and, thus, the observable yields. Different flow observables are then studied by selecting high-energy jet events in different momentum ranges. The observables for different momenta are then compared to those of all events. The differences found in the flow harmonics and their correlations for different jet momenta are reported, showing evidence of Mach cone formation in heavy-ion collisions. The observations for different jet momenta are then quantified with a χ² test to assess the sensitivity of different observables to the momentum selections.
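The quantification step can be sketched as a simple χ² comparison between a flow observable in jet-selected events and the all-event baseline (the v_n values and uncertainties below are invented placeholders, not the thesis' results):

```python
def chi2(selected, baseline, sigma):
    """Chi-squared of an observable measured in jet-selected events
    against the all-event baseline, given per-point uncertainties sigma."""
    return sum((s - b) ** 2 / e ** 2 for s, b, e in zip(selected, baseline, sigma))

# Toy flow harmonics v2, v3, v4 in jet-selected events vs. all events.
v_jet = [0.100, 0.070, 0.030]
v_all = [0.100, 0.060, 0.020]
sigma = [0.010, 0.010, 0.010]
x2 = chi2(v_jet, v_all, sigma)  # 2.0: v3 and v4 each shifted by one sigma
```

A larger χ² for a given jet-momentum selection would then flag that observable as sensitive to the selection.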
  • Siilin, Kasper (2022)
    I use hydrodynamic cosmological N-body simulations to study the effect that a secondary period of inflation, driven by a spectator field, would have on Local Group substructures. Simulations of the Local Group have been widely adopted for studying nonlinear structure formation on small scales, essentially because detailed observations of faint dwarf galaxies are mostly limited to the Local Group and its immediate surroundings. In particular, the ∼100 dwarf galaxies discovered out to a radius of 3 Mpc from the Sun constitute a sample that, when compared to simulations, has the potential to discriminate between different cosmological models on small scales. The two-period inflaton–curvaton inflation model is one such example, since it gives rise to a small-scale cut-off in the primordial power spectrum, compared to the ΛCDM model with single-field power-law inflation. I investigate the substructures that form in a simulated analogue of the Local Group, with initial conditions that incorporate such a modified power spectrum. The most striking deviation from standard power-law inflation is the reduction of the total number of subhalos with v_max > 10 km/s, by a factor of ∼10 for isolated subhalos and ∼6 for satellites. However, the reduction is mostly in the number of non-star-forming subhalos, and the studied model thus remains a viable candidate, taking into account the uncertainty in the Local Group total mass estimate. The formation of the first galaxies is also delayed, and the central densities of galaxies with v_max < 50 km/s are lowered: their circular velocities at 1 kpc from the centre are decreased and the radii of maximum circular velocity are increased. As for the stellar mass–metallicity and stellar mass–halo mass relations, or the selection effects from tidal disruption, I find no significant differences between the models.
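The central-density diagnostics above reduce to the circular velocity profile v_circ(r) = sqrt(G M(<r) / r), evaluated from the mass enclosed by simulation particles. A minimal sketch with equal-mass particles (the particle radii and mass below are made-up numbers, not the thesis' simulation data):

```python
import math

G = 4.3009e-6  # Newton's constant in kpc * (km/s)^2 / Msun

def v_circ(radii_kpc, m_particle_msun, r_kpc):
    """Circular velocity at radius r_kpc from the mass enclosed by
    equal-mass particles at the given radii."""
    m_enc = m_particle_msun * sum(1 for rp in radii_kpc if rp < r_kpc)
    return math.sqrt(G * m_enc / r_kpc)

# Toy halo: 100 particles inside 1 kpc, 100 farther out, 1e6 Msun each.
radii = [0.5] * 100 + [2.0] * 100
v1 = v_circ(radii, 1.0e6, 1.0)  # ~20.7 km/s at 1 kpc
```

Comparing such profiles between the two sets of initial conditions is what reveals the lowered central densities and the outward shift of the radius of maximum circular velocity.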
  • Lintuluoto, Adelina Eleonora (2021)
    At the Compact Muon Solenoid (CMS) experiment at CERN (European Organization for Nuclear Research), the building blocks of the Universe are investigated by analysing the observed final-state particles resulting from high-energy proton-proton collisions. However, direct detection of final-state quarks and gluons is not possible due to a phenomenon known as colour confinement. Instead, event properties that correspond closely to the distributions of these quarks and gluons are studied. These event properties are known as jets. Jets are central to particle physics analysis, and our understanding of them, and hence of our Universe, depends on our ability to accurately measure their energy. Unfortunately, current detector technology is imprecise, necessitating downstream correction of measurement discrepancies. To achieve this, the CMS experiment employs a sequential multi-step jet calibration process. The process is performed several times per year, and more often during periods of data collection. Automating the jet calibration would increase the efficiency of the CMS experiment. By automating the code execution, the workflow could be performed independently of the analyst. This, in turn, would speed up the analysis and reduce the analyst's workload. In addition, automation facilitates higher levels of reproducibility. In this thesis, a novel method for automating the derivation of jet energy corrections from simulation is presented. To achieve automation, the methodology utilises declarative programming: the analyst is only required to express what should be executed, and no longer needs to specify how to execute it. To successfully automate the computation of jet energy corrections, it is necessary to capture detailed information concerning both the computational steps and the computational environment. The former is achieved with a computational workflow, and the latter using container technology.
This allows a portable and scalable workflow to be achieved, one that is easy to maintain and to compare to previous runs. The results of this thesis strongly suggest that capturing complex experimental particle physics analyses with declarative workflow languages is both achievable and advantageous. The productivity of the analyst was improved, and reproducibility was facilitated. However, the method is not without its challenges: declarative programming requires the analyst to think differently about the problem at hand, so there are some sociological challenges to methodological uptake. Nevertheless, once the extensive benefits are understood, we anticipate widespread adoption of this approach.
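The declarative idea can be sketched as follows: the analyst writes only *what* each step is (container image, command, inputs, outputs), and an engine derives *how* to run it, for example the execution order. Everything below (step names, image names, file names, the tiny scheduler) is an illustrative assumption, not the actual CMS workflow or a real workflow language:

```python
# A declarative two-step calibration workflow: data, not code.
workflow = {
    "steps": [
        {"name": "derive_corrections", "image": "example/jec-fit",
         "command": "python fit_response.py",
         "inputs": ["samples.root"], "outputs": ["corrections.txt"]},
        {"name": "simulate", "image": "example/jec-sim",
         "command": "bash generate_samples.sh",
         "inputs": [], "outputs": ["samples.root"]},
    ]
}

def execution_order(wf):
    """Derive a run order from the declared inputs/outputs alone: a step
    runs once every step producing one of its inputs has already run."""
    producer = {out: s["name"] for s in wf["steps"] for out in s["outputs"]}
    deps = {s["name"]: {producer[i] for i in s["inputs"] if i in producer}
            for s in wf["steps"]}
    order, done = [], set()
    while len(order) < len(deps):
        ready = [n for n in deps if n not in done and deps[n] <= done]
        if not ready:
            raise ValueError("cyclic dependencies in workflow")
        order.extend(ready)
        done.update(ready)
    return order

order = execution_order(workflow)  # ['simulate', 'derive_corrections']
```

Note that the steps are declared in the "wrong" order; the engine, not the analyst, works out the correct one, which is the essence of the declarative style described above.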
  • Räsänen, Juska (2021)
    Coronal mass ejections (CMEs) are large-scale eruptions of plasma entrained in a magnetic field. They occur in the solar corona, and from there they propagate into interplanetary space along with the solar wind. If a CME travels faster than the surrounding solar wind, a shock wave forms. Shocks driven by CMEs can act as powerful accelerators of charged particles. When charged particles such as electrons are accelerated, they emit electromagnetic radiation, especially in the form of radio waves. Much of the radio emission from CMEs comes in the form of solar radio bursts. Traditionally, solar radio bursts are classified into five types, called type I–V bursts, based on their characteristics and appearance in a dynamic spectrum. Of these five types, type II radio bursts in particular are believed to be signatures of shock waves in the corona and interplanetary space. There are, however, also radio bursts associated with CMEs and shocks that do not fit the description of any of the five standard types. In this thesis, three moving radio bursts associated with a CME that erupted on May 22, 2013 are identified and studied in detail. The characteristics of the bursts do not match those of the usual five types of solar radio bursts. The aim of the work is to ascertain the emission mechanism behind the observed radio bursts, and to locate the sites of electron acceleration that are the sources of the emission. The kinematics and the spectral features of the emission are studied in order to answer these questions. Analysis of the spectral features of the moving bursts showed that they were emitted via plasma emission. Analysis of the kinematics revealed that the moving radio bursts originated unusually high in the corona, at the northern flank of the CME.
The CME studied in this work was preceded by another one which erupted some hours earlier, and the disturbed coronal environment likely caused the radio emission to be emitted from an unusual height. It was found that the bursts likely originated from electrons accelerated at the shock driven by the CME.
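Plasma emission ties the observed frequency to the local electron density, f_p [MHz] ≈ 8.98e-3 · sqrt(n_e [cm^-3]), and a coronal density model then converts that density into a height; this is the standard way emission heights like those above are estimated. A sketch using the one-fold Newkirk density model (the choice of model and the 150 MHz example frequency are assumptions for illustration, not values from the thesis):

```python
import math

def density_from_frequency(f_mhz, harmonic=1):
    """Electron density (cm^-3) implied by plasma emission observed at
    f_mhz, at the fundamental (harmonic=1) or second harmonic (harmonic=2)."""
    f_p = f_mhz / harmonic
    return (f_p / 8.98e-3) ** 2

def newkirk_height(n_e_cm3, n0=4.2e4):
    """Heliocentric distance in solar radii at which the one-fold Newkirk
    model n_e(r) = n0 * 10**(4.32 / r) matches the given density."""
    return 4.32 / math.log10(n_e_cm3 / n0)

n_e = density_from_frequency(150.0)  # ~2.8e8 cm^-3 at 150 MHz
r = newkirk_height(n_e)              # ~1.13 solar radii
```

A disturbed corona, as in the event studied here, changes the density profile, which is why the inferred emission heights can come out unusually large.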
  • Bieleviciute, Auguste (2023)
    The high-luminosity upgrade of the Large Hadron Collider (LHC) will result in higher collision rates, and the current equipment is not adequate for this future era of operations. Identification and reconstruction of hard interactions may be hampered by the spatial overlap of particle tracks and energy deposits from additional collisions, which often leads to false triggers. In addition, current particle detectors suffer from radiation damage that severely affects the accuracy of our results. The new minimum ionizing particle (MIP) timing detector will be equipped with low-gain avalanche detectors, whose comparatively good timing resolution helps with track reconstruction and whose thin design limits the radiation damage over time. In this thesis, we build an experimental set-up in order to study the timing resolution of these detectors closely. To find the timing resolution, we take the time difference between the signals from two detectors and fill it into a histogram, to which we apply a Gaussian fit. The standard deviation of this Gaussian is called the time spread, from which we can estimate the timing resolution. We first build, characterize and improve our experimental set-up using reference samples with known timing resolution until the set-up is able to reproduce the reference value. Then we repeat the measurements with irradiated samples in order to study how radiation damage impacts timing. We were able to tune our set-up with reference samples until we measured a timing resolution of 33 ± 2 ps. We use this result to calculate the timing resolution of an irradiated sample (8.0 × 10^14 proton fluence), for which we found a timing resolution of 62 ± 2 ps. This thesis also discusses the analysis of the data and how the data can be re-analyzed to try to improve the final result.
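The unfolding step described above rests on the fact that, for independent jitters, the time-difference spread adds in quadrature: sigma_diff² = sigma_DUT² + sigma_ref². A sketch with simulated arrival times (here the sample standard deviation stands in for the Gaussian fit, and the 33 ps / 62 ps truth values are borrowed from the abstract purely to generate toy data):

```python
import math
import random
from statistics import stdev

def timing_resolution(diffs_ps, sigma_ref_ps):
    """Unfold the device-under-test resolution from the measured spread
    of DUT-reference time differences: sigma_diff^2 = sigma_DUT^2 + sigma_ref^2."""
    sigma_diff = stdev(diffs_ps)  # stand-in for a Gaussian fit's sigma
    return math.sqrt(sigma_diff ** 2 - sigma_ref_ps ** 2)

# Toy data: reference detector with 33 ps jitter, irradiated sample with 62 ps.
random.seed(42)
diffs = [random.gauss(0.0, 33.0) - random.gauss(0.0, 62.0) for _ in range(20000)]
sigma_dut = timing_resolution(diffs, 33.0)  # close to 62 ps
```

In the real measurement the fit is done on a binned histogram and both resolutions carry uncertainties, but the quadrature subtraction is the same.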