
Browsing by study line "Particle Physics and Cosmology"


  • Nincă, Ilona Ştefana (2020)
    Cadmium telluride (CdTe) has a high quantum efficiency and a bandgap of 1.44 eV, which makes it well suited to the efficient detection of gamma rays. The aim of this thesis is to explore the properties of a pixelated CdTe detector and the procedures used to fine-tune its electronic readout system. A fully functional CdTe detector would be useful in medical applications such as Boron Neutron Capture Therapy (BNCT), which requires a detector with good energy resolution, good timing resolution and good stopping power. Although CdTe is a promising material, growing the crystal is difficult because various defects form inside it, so the quality assurance process has to be thorough for suitable crystals to be found. An aluminium oxide (Al2O3) passivation layer was deposited onto the surface of the crystal. The contacts on both sides were created by titanium-tungsten (TiW) and gold (Au) sputter deposition, followed by electroless nickel growth. I tested the pixelated CdTe detector with radioactive sources such as Am-241, Ba-133, Co-57 and Cs-137 and with X-ray quality series in order to study the sensitivity of the device and its capacity to detect gamma and X-rays.
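The gamma lines of sources such as Am-241 and Cs-137 are commonly used to calibrate a pulse-height spectrum. A minimal two-point calibration sketch, with made-up ADC channel numbers (the thesis's actual peak positions are not given here):

```python
# Illustrative two-point energy calibration of a pulse-height spectrum,
# assuming the Am-241 (59.5 keV) and Cs-137 (661.7 keV) photopeaks have been
# located at hypothetical channel numbers. The channel values are invented.

def linear_calibration(ch1, e1, ch2, e2):
    """Return (gain, offset) mapping ADC channel -> energy in keV."""
    gain = (e2 - e1) / (ch2 - ch1)
    offset = e1 - gain * ch1
    return gain, offset

gain, offset = linear_calibration(245, 59.5, 2710, 661.7)  # channels made up

def to_energy(channel):
    return gain * channel + offset
```

Any further peak (e.g. a Co-57 line) then serves as a cross-check of the calibration's linearity.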
  • Nurminen, Niilo Waltteri (2021)
    Phase transitions in the early Universe and in condensed matter physics are active fields of research. During these transitions, objects such as topological solitons and defects are produced by symmetry breaking. Studying such objects more thoroughly could shed light on some modern problems in cosmology, such as baryogenesis, and explain many aspects of materials research. One example of such topological solitons is the (1+1)-dimensional kink and its higher-dimensional counterpart, the domain wall. The dynamics of kink collisions are complicated and very sensitive to initial conditions, and making accurate predictions within such a system has proven difficult despite research stretching back to the 1970s. Especially difficult is predicting the location of resonance windows and giving a proper theoretical explanation for their structure. A deeper understanding of these objects is interesting in its own right, but can also bring insight into predicting the cosmological signatures they may generate. In this thesis we summarize the common field-theoretic tools and methods for the analytic treatment of kinks. Homotopy theory and its applications are also covered in the context of classifying topological solitons and defects. We present our numerical simulation scheme and results on kink-antikink and kink-impurity collisions in the $\phi^4$ model. Kink-antikink pair production from a wobbling kink is also studied; here we found that the separation velocity of the produced kink-antikink pair is directly correlated with the excitation amplitude of the wobbling kink. Direct annihilation of the produced pair was also observed. We modify the $\phi^4$ model by adding a small symmetry-breaking term $\delta \phi^3$, which turns the kinks into accelerating bubble walls. The collision dynamics and pair production of these objects are explored with the same simulation methods.
We observe multiple new effects in kink-antikink collisions, such as potentially perpetual bouncing and faster bion formation in comparison to the $\phi^4$ model. We also show that the $\delta$ term defines the preferred vacuum by inevitably annihilating any kink-antikink pair. During pair production we observed a momentum transfer between the produced bion and the original kink, and direct annihilation seems unlikely in such processes. For wobbling-kink–impurity collisions we found an asymmetric spectral wall. Future research prospects and potential extensions of our analysis are also discussed.
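The kind of kink-antikink collision described above can be sketched in a few lines: a leapfrog integration of the (1+1)-dimensional $\phi^4$ equation of motion $\phi_{tt} = \phi_{xx} + \phi - \phi^3$. The grid size, initial separation and velocity below are illustrative choices, not the thesis's actual simulation scheme.

```python
import numpy as np

# Toy kink-antikink collision in the (1+1)-d phi^4 model, leapfrog scheme.
L, N, dt = 40.0, 1024, 0.01
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]                       # dt < dx satisfies the CFL condition
v, a = 0.2, 10.0                       # initial speed and half-separation
g = 1.0 / np.sqrt(1 - v**2)            # Lorentz factor of the boosted kink

def kink(x, x0, v, t=0.0):
    """Boosted phi^4 kink centred at x0 moving with velocity v."""
    return np.tanh(g * (x - x0 - v * t) / np.sqrt(2))

# Kink at -a moving right, antikink at +a moving left; vacuum phi = -1 outside.
phi     = kink(x, -a, v) - kink(x, a, -v) - 1.0
phi_old = kink(x, -a, v, -dt) - kink(x, a, -v, -dt) - 1.0

def laplacian(f):
    return (np.roll(f, 1) + np.roll(f, -1) - 2 * f) / dx**2   # periodic BCs

for _ in range(6000):                  # collision occurs near t ~ a/v = 50
    phi_new = 2 * phi - phi_old + dt**2 * (laplacian(phi) + phi - phi**3)
    phi_old, phi = phi, phi_new
```

At v = 0.2, below the critical velocity of the $\phi^4$ model, the pair is expected to capture into an oscillating bion rather than separate.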
  • Gonzalez Ateca, Marcos (2020)
    The distribution of matter in space is not homogeneous. Large structures such as galaxy groups and clusters, and big empty regions called voids, can be observed at large scales in the Universe. The large-scale structure of the Universe depends on both the cosmological parameters and the dynamics of galaxy formation and evolution. One of the main observables that allows us to quantify this structure is the two-point correlation function, with which we can trace different galaxy properties such as luminosity and stellar mass, and track their evolution with redshift. In galaxy surveys, we do not obtain the locations of galaxies in real space; we obtain our data in what is called redshift space. Redshift space can be defined as a distortion of real space generated by the redshifts introduced by the peculiar velocities of galaxies on top of the Hubble expansion of the Universe. Therefore, the distribution of galaxies in redshift space looks different from the one obtained in real space. The differences between the two spaces are small but not negligible, and they depend strictly on the cosmology. In this work, we assume a ΛCDM cosmology. In order to compute the different one-dimensional and two-dimensional correlation functions, we use the most recent version of the code provided by the Euclid consortium, which belongs officially to the ESA Euclid mission. We also need galaxy catalogues: the Minerva mocks, a set of 300 cosmological mock catalogues produced with N-body simulations. Finally, as there is a well-defined relation between real and redshift space, one can also assume that there is a relation between the two-point correlation functions in the two spaces.
In this project, we prove that the real-space one-dimensional two-point correlation function, which is the physically meaningful one, can be derived from the two-dimensional two-point correlation function in redshift space following a geometrical procedure that is independent of approximations. This method, in theory, should work on all distance scales.
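To make the central observable concrete: the two-point correlation function is commonly estimated from pair counts with the Landy–Szalay estimator, ξ(r) = (DD − 2DR + RR)/RR. Below is a brute-force sketch on a toy uniform point set (for which ξ ≈ 0); this is an illustration of the estimator, not the Euclid consortium code used in the thesis.

```python
import numpy as np

# Toy Landy-Szalay estimator on uniform random "galaxies" in a periodic-free box.
rng = np.random.default_rng(0)
box = 100.0
data = rng.uniform(0, box, size=(1000, 3))   # toy data catalogue
rand = rng.uniform(0, box, size=(1000, 3))   # random comparison catalogue
edges = np.linspace(1.0, 30.0, 15)           # separation bins, illustrative units

def normed_counts(a, b, edges, auto=False):
    """Histogram of pair separations, normalised by the number of pairs."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    d = d[np.triu_indices_from(d, k=1)] if auto else d.ravel()
    h, _ = np.histogram(d, bins=edges)
    return h / d.size

dd = normed_counts(data, data, edges, auto=True)
dr = normed_counts(data, rand, edges)
rr = normed_counts(rand, rand, edges, auto=True)
xi = (dd - 2 * dr + rr) / rr                 # Landy-Szalay xi(r), ~0 here
```

A real analysis bins pairs in (r_p, π) or (s, μ) to obtain the two-dimensional redshift-space function the thesis starts from.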
  • Annala, Jaakko (2020)
    We study how higher-order gravity affects Higgs inflation in the Palatini formulation. We first review the metric and Palatini formulations in a comparative manner and discuss their differences. Next, cosmic inflation driven by a scalar field and the inflationary observables are discussed. After this we review Higgs inflation and compute the inflationary observables in both the metric and Palatini formulations. We then consider adding higher-order curvature terms to the action, and derive the equations of motion for the most general parity-conserving action quadratic in the curvature, in both the metric and Palatini formulations. Finally, we present a new result: an analysis of Higgs inflation in the Palatini formulation with higher-order curvature terms. We consider a simplified scenario in which only terms constructed from the symmetric part of the Ricci tensor are added to the action. This implies that there are no new gravitational degrees of freedom, which makes the analysis easier. We find that the scalar perturbation spectrum is unchanged, but the tensor perturbation spectrum is suppressed by the higher-order curvature couplings.
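The restricted scenario can be written schematically as follows. This is an illustrative form only (with $\xi$ the nonminimal Higgs coupling and $\alpha$, $\beta$ free couplings), not necessarily the exact action used in the thesis:

```latex
S = \int \mathrm{d}^4x\, \sqrt{-g}\, \Big[ \tfrac{1}{2}\big(M_{\rm Pl}^2 + \xi h^2\big) R
    + \alpha R^2 + \beta\, R_{(\mu\nu)} R^{(\mu\nu)} \Big] + S_{\rm matter}[h, g_{\mu\nu}]
```

In the Palatini formulation the connection is an independent variable, the Ricci tensor need not be symmetric, and $R_{(\mu\nu)}$ denotes its symmetric part; restricting to such terms is what keeps new propagating gravitational degrees of freedom from appearing.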
  • Pankkonen, Joona (2020)
    The Standard Model is one of the most accurate theories we have. It has demonstrated its success through predictions and discoveries of new particles, such as the gauge bosons W and Z and the heaviest quarks: charm, bottom and top. After the discovery of the Higgs boson in 2012, the Standard Model became complete in the sense that all the elementary particles it contains had been observed. In this thesis I cover the particle content and interactions of the Standard Model and then explain the Higgs mechanism in detail. The key element of the Higgs mechanism is spontaneous symmetry breaking, through which the mechanism gives rise to the masses of the particles, especially the gauge bosons. The Higgs boson was found at the Large Hadron Collider (LHC) by the CMS and ATLAS experiments, in which protons were collided at high energies (8–13 TeV). These collisions produce the Higgs boson through different production channels such as gluon fusion (ggF), vector boson fusion (VBF) and Higgsstrahlung. Since the lifetime of the Higgs boson is very short, it cannot be observed directly; in the CMS experiment the Higgs boson was detected via the channels H → ZZ → 4l and H → γγ. In this thesis I examine how well the Standard Model corresponds to LHC data by using the signal strengths of the production and decay channels, parametrizing the interactions of the fermionic and bosonic channels. A data analysis carried out with the least squares method gave confidence-level contours that describe how well the predictions of the Standard Model correspond to the LHC data.
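The least-squares idea behind such a fit can be sketched for the simplest case of a single global signal strength μ. The per-channel values and uncertainties below are invented for illustration; they are not the LHC data analysed in the thesis.

```python
import numpy as np

# Toy chi-square fit of a global signal strength mu to per-channel measurements.
mu_meas = np.array([1.10, 0.95, 1.20, 0.85])   # hypothetical per-channel mu
sigma   = np.array([0.20, 0.25, 0.30, 0.25])   # hypothetical 1-sigma errors

def chi2(mu):
    """Least-squares statistic for a common signal strength mu."""
    return float(np.sum(((mu_meas - mu) / sigma) ** 2))

# For one parameter the weighted least-squares fit has a closed form:
w = 1.0 / sigma**2
mu_hat = float(np.sum(w * mu_meas) / np.sum(w))   # best-fit signal strength
err = 1.0 / np.sqrt(float(np.sum(w)))             # 68% interval: mu_hat +/- err
```

With more parameters (separate fermionic and bosonic strengths, as in the thesis) the same χ² is minimised numerically and its contours give the confidence-level regions.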
  • Berlea, Vlad Dumitru (2020)
    The nature of dark matter (DM) is one of the outstanding problems of modern physics. The existence of dark matter implies physics beyond the Standard Model (SM), as the SM does not contain any viable DM candidates. Dark matter manifests itself through various cosmological and astrophysical observations: the rotational speeds of galaxies, structure formation, measurements of the Cosmic Microwave Background (CMB) and gravitational lensing of galaxy clusters. An attractive explanation of the observed dark matter density is provided by the WIMP (Weakly Interacting Massive Particle) paradigm. In this thesis I explore this idea within the well-motivated Higgs portal framework. In particular, I explore three options for the dark matter composition: a scalar field, and U(1) and SU(2) hidden gauge fields. I find that the WIMP paradigm is still consistent with the data. Even though it finds itself under pressure from direct detection experiments, it is not yet in crisis: simple and well-motivated WIMP models can fit the observed DM density without violating the collider and direct DM detection constraints.
  • Molander, Andreas (2020)
    The Standard Model (SM) is the best-established theory describing the observed matter and its interactions through all the fundamental forces except gravity. The SM is, however, not complete. For example, it does not explain the large difference between the electroweak scale and the Planck scale, known as the hierarchy problem, nor does it explain dark matter. There is therefore a need for more comprehensive theories beyond the SM. Supersymmetry (SUSY) extends the SM by predicting a partner particle (sparticle) for each currently known elementary particle. Among its benefits are that it offers an explanation of the hierarchy problem and predicts a good particle candidate for dark matter. However, there is no experimental evidence for SUSY so far. The search for SUSY particles is ongoing at the experiments using the Large Hadron Collider (LHC) at CERN. So far the searches have focused on strongly interacting supersymmetric particles, without findings. One of the parameter ranges still to be covered is the compressed-mass scenario at the lower mass end for weakly interacting sparticles, where the lightest and second-lightest supersymmetric particles are close in mass. If they exist, low-mass SUSY particles could be created at the LHC from two fusing photons emitted by forward-scattered protons. In such two-photon (central exclusive) processes, both protons may remain intact and continue their path down the beamline. Central exclusive processes are rather rare, so to advance the study of these events, new tagging techniques are required to record as many of them as possible. We are interested in the kinematic range with a mass difference of less than 60 GeV between the slepton and the neutralino, the supersymmetric partners of the leptons and the neutral bosons.
The CMS detector at the LHC has two event-filtering (trigger) systems: the low-level (L1) trigger and the high-level trigger (HLT). A study was conducted on how a specific HLT path could increase the number of recorded events for the previously mentioned process without significantly increasing the total HLT rate. To select more events, the transverse-momentum threshold for the produced leptons ought to be lowered. The forward-scattered protons will be detected by the Precision Proton Spectrometer (PPS). This thesis shows that requiring proton tracks in the PPS tracking detectors, and tuning their multiplicity cut, compensates for the lowering of the transverse-momentum threshold, keeping the overall HLT rate sensible while still enabling more interesting physics to be recorded.
  • Stendahl, Alex (2020)
    The Standard Model of particle physics has been very successful in describing particles and their interactions. In 2012 the last missing piece, the Higgs boson, was discovered at the Large Hadron Collider. However, for all its success, the Standard Model fails to explain some phenomena of nature. Two of these unexplained phenomena are dark matter and the metastability of the electroweak vacuum. In this thesis we study one of the simplest extensions of the Standard Model: the complex singlet scalar extension. In this framework the CP-even component of the singlet mixes with the Standard Model-like Higgs boson through the portal operator to form new mass eigenstates, while the CP-odd component is a pseudo-Goldstone boson which could be a viable dark matter candidate. We analyse the parameter space of the model with respect to constraints from particle physics experiments and cosmological observations. The time evolution of the dark matter number density is derived in order to study the process of dark matter freeze-out, and the relic density of the dark matter candidate is then calculated with the micrOMEGAs tool and compared to the measured value of the dark matter relic density. Moreover, the electroweak vacuum can be stabilised due to the contribution of the singlet scalar to the Standard Model Higgs potential. We derive the β-functions of the couplings in order to study the renormalisation-group evolution of the parameters of the model. With the contribution of the portal coupling to the β-function of the Higgs quartic coupling, we are able to stabilise the electroweak vacuum up to the Planck scale. The two-loop β-functions are calculated using the SARAH tool.
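The stabilisation mechanism can be caricatured with a one-coupling running: the portal coupling λ_hs enters the β-function of the Higgs quartic λ with a positive sign, counteracting the destabilising top-Yukawa contribution. In the toy below the constant c stands in for the top-Yukawa term and all coefficients are schematic; the thesis uses the full two-loop SARAH β-functions, not this cartoon.

```python
import numpy as np

# Cartoon one-loop-style running of the Higgs quartic coupling lambda.
# beta_lambda ~ k * (24 lambda^2 - c + lambda_hs^2 / 2), with k = 1/(16 pi^2).
# c mimics the destabilising -6 y_t^4 top contribution; values illustrative.

def run_lambda(lam0=0.13, lam_hs=0.0, c=1.0, t_max=35.0, dt=0.01):
    """Euler-integrate d(lambda)/dt from t=0 (top mass) to t_max (~Planck)."""
    k = 1.0 / (16 * np.pi**2)
    lam = lam0
    for _ in range(int(t_max / dt)):          # t = ln(mu / m_t)
        lam += dt * k * (24 * lam**2 - c + 0.5 * lam_hs**2)
    return lam

lam_sm     = run_lambda(lam_hs=0.0)   # drifts negative: metastable vacuum
lam_portal = run_lambda(lam_hs=1.2)   # portal contribution keeps lambda > 0
```

The qualitative point is the sign: a large enough portal coupling lifts the running of λ so that it stays positive up to the Planck scale.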
  • Rantanen, Milla-Maarit (2020)
    Semiconductor radiation detectors are devices used to detect electromagnetic and particle radiation. The signal formation is based on the transport of charges between the valence band and the conduction band: the interaction between the detector material and the radiation generates free electrons and holes that move in opposite directions in the electric field applied between the electrodes. The movement of charges induces a current in the external electrical circuit, which can be used for particle identification, measurement of energy or momentum, timing, or tracking. There are several different detector materials and designs, and new options are continuously being developed. Diamond is a detector material that has received a great amount of interest in many fields, owing to its many unique properties. Many of them arise from the diamond crystal structure and the strength of the bond between the carbon atoms. The tight and rigid structure makes diamond a strong and durable material, which allows the operation of diamond detectors in harsh radiation environments. This, combined with fast signal formation and a short response time, makes the diamond detector an excellent choice for high-energy physics applications. The diamond structure also leads to a wide band gap. Thanks to the wide band gap, diamond detectors have low leakage current and can be operated even at high temperatures without protection from surrounding light. The electrical properties of semiconductors depend strongly on the concentration of impurities and crystal defects, so determination of the electrical properties can be used to study the crystal quality of the material. The electrical properties of the material determine the safe operational region of the device, and knowledge of the leakage current and the charge-carrier transport mechanism is required for the optimized operation of detectors.
Characterization of electrical properties is therefore an important part of semiconductor device fabrication. Electrical characterization should be done at different stages of fabrication in order to detect problems at an early stage and to get an idea of what could have caused them. This work describes the quality assurance process of single-crystal CVD (chemical vapour deposition) diamond detectors for the PPS detectors of the CMS experiment. The quality assurance process includes visual inspection of the diamond surfaces and dimensions by optical and cross-polarized light microscopy, and electrical characterization by measurement of leakage current and charge collection efficiency (CCE). The CCE measurement setup was improved with a stage controller, which allows automatic measurement of the CCE at several positions on the diamond detector. The operation of the new setup and the reproducibility of the results were studied by repeated measurements of a reference diamond. The setup could successfully be used to measure the CCE over the whole diamond surface; however, the measurement uncertainty is quite large. Further work is needed to reduce the measurement uncertainty and to determine the correlation between observed defects and the measured electrical properties.
  • Veltheim, Otto (2022)
    The measurement of quantum states has been a widely studied problem ever since the discovery of quantum mechanics. In general, we can only measure a quantum state once, as the measurement itself alters the state and, consequently, we lose information about the original state of the system in the process. Furthermore, a single measurement cannot uncover every detail of the system's state, and thus we get only a limited description of the system. There are, however, physical processes, e.g. a quantum circuit, which can be expected to create the same state over and over again. This allows us to measure multiple identical copies of the same system in order to gain a fuller characterization of the state. This process of diagnosing a quantum state through measurements is known as quantum state tomography. Even if we are able to create identical copies of the same system, it is often preferable to keep the number of needed copies as low as possible; in this thesis, we propose a method of optimising the measurements in this regard. The full description of the state requires determining multiple different observables of the system. These observables can be measured from the same copy of the system only if they commute with each other. As the commutation relation is not transitive, it is often quite complicated to find the best way to match the observables with each other according to these commutation relations. This can be handily illustrated with graphs, and the problem of dividing the observables into commuting sets reduces to a well-known graph-theoretical problem called graph colouring. Measuring the observables with acceptable accuracy also requires measuring each observable multiple times. This information can also be included in the graph-colouring approach by using a generalisation called multicolouring.
Our results show that this multicolouring approach can offer significant improvements in the number of needed copies when compared to some other known methods.
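The colouring step can be sketched concretely for Pauli observables: strings that commute qubit-wise can share a measurement setting, so we colour the graph whose edges connect observables that cannot. Greedy colouring is used here only as a simple heuristic, not the optimised multicolouring scheme of the thesis.

```python
from itertools import combinations

# Group Pauli observables into jointly measurable sets via graph colouring.

def qubitwise_commute(p, q):
    """True if two Pauli strings commute qubit by qubit ('I' matches anything)."""
    return all(a == b or 'I' in (a, b) for a, b in zip(p, q))

observables = ["XX", "YY", "ZZ", "XI", "IZ", "ZI"]

# Edges of the conflict graph: pairs that can NOT share a measurement setting.
conflicts = {p: set() for p in observables}
for p, q in combinations(observables, 2):
    if not qubitwise_commute(p, q):
        conflicts[p].add(q)
        conflicts[q].add(p)

# Greedy colouring: give each vertex the smallest colour unused by neighbours.
colour = {}
for p in observables:
    used = {colour[q] for q in conflicts[p] if q in colour}
    colour[p] = next(c for c in range(len(observables)) if c not in used)

groups = {}
for p, c in colour.items():
    groups.setdefault(c, []).append(p)   # one colour class = one setting
```

For this toy set the six observables collapse into three measurement settings, e.g. {ZZ, IZ, ZI} can all be read out from the same copies.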
  • Virta, Maxim (2022)
    Strongly coupled matter called quark–gluon plasma (QGP) is formed in heavy-ion collisions at RHIC [1, 2] and the LHC [3, 4]. The expansion of this matter, driven by pressure gradients, is well described by hydrodynamics. Computations show that the expanding QGP has a small shear viscosity to entropy density ratio (η/s), close to the conjectured lower bound 1/4π [5]. In such a medium one expects that jets passing through it would create Mach cones. Many experimental attempts have been made, but no proper evidence of the Mach cone has been found [6, 7]. Mach cones were thought to cause double bumps in azimuthal correlations; however, these were later shown to be the third flow harmonic. In this thesis a new method is proposed for finding the Mach cone using so-called event engineering. The higher-order flow harmonics and their linear response are known to be sensitive to the medium properties [8]. Hence a Mach cone produced by a high-momentum jet would change the system properties and, thus, the observable yields. Different flow observables are then studied by selecting high-energy jet events in different momentum ranges, and the observables for the different momenta are compared to those from all events. The differences found in the flow harmonics and their correlations for different jet momenta are reported, showing evidence of Mach-cone formation in heavy-ion collisions. The observations for different jet momenta are then quantified with a χ²-test to see the sensitivity of the different observables to the momentum selections.
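To make "flow harmonics" concrete: a coefficient v_n quantifies the n-th Fourier mode of the azimuthal particle distribution and can be estimated from the event Q-vector. The toy event below is sampled from dN/dφ ∝ 1 + 2 v₂ cos 2(φ − Ψ) with invented parameters; it illustrates the observable, not the heavy-ion analysis of the thesis.

```python
import numpy as np

# Estimate the elliptic flow coefficient v2 from azimuthal angles via |Q_2|.
rng = np.random.default_rng(1)
v2_true, psi = 0.1, 0.3           # invented flow magnitude and event plane

def sample_event(n=5000):
    """Accept-reject sample of phi from dN/dphi ~ 1 + 2 v2 cos 2(phi - psi)."""
    phi = rng.uniform(0, 2 * np.pi, size=4 * n)
    envelope = 1 + 2 * v2_true    # maximum of the (unnormalised) density
    keep = rng.uniform(0, envelope, size=phi.size) \
        < 1 + 2 * v2_true * np.cos(2 * (phi - psi))
    return phi[keep][:n]

phi = sample_event()
qx, qy = np.mean(np.cos(2 * phi)), np.mean(np.sin(2 * phi))
v2_est = float(np.hypot(qx, qy))  # |Q_2| estimates v2 (up to finite-N bias)
```

Higher harmonics v₃, v₄, … follow by replacing the factor 2 with n, and event-engineered selections compare such observables between jet-tagged and unbiased event samples.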
  • Siilin, Kasper (2022)
    I use hydrodynamic cosmological N-body simulations to study the effect that a secondary period of inflation, driven by a spectator field, would have on the Local Group substructures. Simulations of the Local Group have been widely adopted for studying nonlinear structure formation on small scales, essentially because detailed observations of faint dwarf galaxies are mostly limited to the Local Group and its immediate surroundings. In particular, the ∼ 100 dwarf galaxies discovered out to a radius of 3 Mpc from the Sun constitute a sample that, when compared to simulations, has the potential to discriminate between different cosmological models on small scales. The two-period inflaton-curvaton inflation model is one such example, since it gives rise to a small-scale cut-off in the ΛCDM primordial power spectrum, compared to the power spectrum of the ΛCDM model with single-field power-law inflation. I investigate the substructures that form in a simulated analogue of the Local Group, with initial conditions that incorporate such a modified power spectrum. The most striking deviation from standard power-law inflation is the reduction of the total number of subhalos with v_max > 10 km/s, by a factor of ∼ 10 for isolated subhalos and ∼ 6 for satellites. However, the reduction is mostly in the number of non-star-forming subhalos, and the studied model thus remains a viable candidate, taking into account the uncertainty in the Local Group total mass estimate. The formation of the first galaxies is also delayed, and the central densities of galaxies with v_max < 50 km/s are lowered: their circular velocities at 1 kpc from the centre are decreased and the radii of maximum circular velocity are increased. As for the stellar mass–metallicity and stellar mass–halo mass relations, or the selection effects from tidal disruption, I find no significant differences between the models.
  • Lintuluoto, Adelina Eleonora (2021)
    At the Compact Muon Solenoid (CMS) experiment at CERN (the European Organization for Nuclear Research), the building blocks of the Universe are investigated by analysing the observed final-state particles resulting from high-energy proton-proton collisions. However, direct detection of final-state quarks and gluons is not possible due to a phenomenon known as colour confinement. Instead, collimated sprays of particles known as jets, whose properties correspond closely to those of the underlying quarks and gluons, are studied. Jets are central to particle physics analysis, and our understanding of them, and hence of our Universe, depends on our ability to accurately measure their energy. Unfortunately, current detector technology is imprecise, necessitating downstream correction of measurement discrepancies. To achieve this, the CMS experiment employs a sequential multi-step jet calibration process. The process is performed several times per year, and more often during periods of data collection. Automating the jet calibration would increase the efficiency of the CMS experiment: by automating the code execution, the workflow could be performed independently of the analyst, which would speed up the analysis and reduce the analyst's workload. In addition, automation facilitates higher levels of reproducibility. In this thesis, a novel method for automating the derivation of jet energy corrections from simulation is presented. To achieve automation, the methodology uses declarative programming: the analyst is simply required to express what should be executed, and no longer needs to determine how to execute it. To successfully automate the computation of jet energy corrections, it is necessary to capture detailed information concerning both the computational steps and the computational environment. The former is achieved with a computational workflow, and the latter using container technology.
This allows a portable and scalable workflow to be achieved, which is easy to maintain and to compare to previous runs. The results of this thesis strongly suggest that capturing complex experimental particle-physics analyses with declarative workflow languages is both achievable and advantageous: the productivity of the analyst was improved and reproducibility was facilitated. The method is not, however, without its challenges. Declarative programming requires the analyst to think differently about the problem at hand, so there are some sociological challenges to methodological uptake. Once the extensive benefits are understood, however, we anticipate widespread adoption of this approach.
  • Räsänen, Juska (2021)
    Coronal mass ejections (CMEs) are large-scale eruptions of plasma entrained in a magnetic field. They occur in the solar corona, and from there they propagate into interplanetary space along with the solar wind. If a CME travels faster than the surrounding solar wind, a shock wave forms. Shocks driven by CMEs can act as powerful accelerators of charged particles, and when charged particles such as electrons are accelerated, they emit electromagnetic radiation, especially in the form of radio waves. Much of the radio emission from CMEs comes in the form of solar radio bursts. Traditionally, solar radio bursts are classified into five types, called type I–V bursts, based on their characteristics and appearance in a dynamic spectrum. Of these five types, type II radio bursts in particular are believed to be signatures of shock waves in the corona and interplanetary space. There are, however, also radio bursts associated with CMEs and shocks that do not fit the description of any of the five standard types. In this thesis, three moving radio bursts associated with a CME that erupted on May 22, 2013 are identified and studied in detail. The characteristics of the bursts do not match those of the usual five types of solar radio bursts. The aim of the work is to ascertain the emission mechanism of the observed radio bursts and to locate the sites of electron acceleration that are the sources of the emission. The kinematics and spectral features of the emission are studied to answer these questions. Analysis of the spectral features showed that the bursts were emitted via plasma emission, and analysis of the kinematics revealed that the moving radio bursts originated unusually high in the corona, at the northern flank of the CME.
The CME studied in this work was preceded by another that erupted a few hours earlier, and the disturbed coronal environment likely caused the radio emission to be emitted from an unusual height. The bursts likely originated from electrons accelerated at the shock driven by the CME.
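The link between plasma emission frequency and coronal height, used in this kind of analysis, can be sketched with the textbook relation f_p [kHz] ≈ 8.98 √(n_e [cm⁻³]) inverted through a one-fold Newkirk density model. This is a standard back-of-the-envelope estimate with an illustrative burst frequency, not the specific density model or data of the thesis.

```python
import numpy as np

# Estimate the emission height of a metric radio burst from its frequency,
# assuming fundamental plasma emission and a one-fold Newkirk corona:
# n_e(r) = 4.2e4 * 10**(4.32 / r), with r in solar radii.

def density_from_freq(f_mhz):
    """Electron density (cm^-3) for fundamental plasma emission at f_mhz."""
    return (f_mhz * 1e3 / 8.98) ** 2

def newkirk_height(n_e, n0=4.2e4):
    """Radial distance r (solar radii) where the Newkirk model gives n_e."""
    return 4.32 / np.log10(n_e / n0)

n_e = density_from_freq(50.0)   # a hypothetical 50 MHz burst
r = float(newkirk_height(n_e))  # emission height in solar radii
```

An anomalously low observed frequency at a given time thus translates, through the density model, into an anomalously large emission height of the kind reported for these bursts.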