
Browsing by discipline "Teoretisk fysik"


  • Sarkkinen, Miika (2018)
    A memory effect is a net change in matter distribution due to radiation. It is a classically observable effect that takes place in the asymptotic region of spacetime. The study of memory effects started in gravitational physics, where the effect is manifested as a permanent displacement in a configuration of test particles due to gravitational waves. Recently, analogous effects have been studied in the context of gauge theories. This thesis is focused on the memory effect present in electrodynamics. The study starts with a discussion of the fundamental aspects of electrodynamics as a U(1) gauge-invariant theory. Next, the tools of conformal compactification and the Penrose diagram of Minkowski space are introduced. After these preliminaries, the electromagnetic analog of gravitational-wave memory, first analyzed by L. Bieri and D. Garfinkle, is studied in detail. Starting from Maxwell's equations, a partial differential equation is derived in which the two-sphere divergence of the memory vector depends on the total charge flux F that reaches null infinity and on the initial and final values of the radial component of the electric field. The memory vector is then found to consist of two parts: the ordinary memory vector and the null memory vector. The solution of Bieri and Garfinkle for the null memory vector is reproduced by expanding the flux F in terms of spherical harmonics. Finally, the connection between the electromagnetic memory effect and the so-called asymptotic symmetries of U(1) gauge theory is analyzed. The memory effect is found to determine a large gauge transformation (LGT) in which the gauge parameter becomes a function of angles at null infinity. Since an LGT is a local symmetry of U(1) theory, there must be a conserved Noether current and Noether charge associated with it. As the memory effect generates an LGT, it is natural to expect a connection between the memory effect and the Noether charge. The study thus culminates in an equation that relates the conserved charge to the memory effect.
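    The spherical-harmonic step mentioned in this entry can be made concrete with a short numerical sketch. The Python snippet below is an illustration only, not the thesis's own calculation; the grid resolution and the toy flux F are assumptions. It expands a flux on the two-sphere in spherical harmonics and inverts the sphere Laplacian mode by mode, using the fact that Y_lm is an eigenfunction of the two-sphere Laplacian with eigenvalue -l(l+1).

    import numpy as np
    from scipy.special import sph_harm

    lmax = 8
    theta = np.linspace(0.0, np.pi, 181)                        # polar angle
    phi = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)    # azimuthal angle
    TH, PH = np.meshgrid(theta, phi, indexing="ij")
    dA = np.sin(TH) * (theta[1] - theta[0]) * (phi[1] - phi[0]) # area element on the unit sphere

    F = np.cos(TH) ** 2 - 1.0 / 3.0                  # toy flux with zero mean (pure l = 2 content)

    Phi = np.zeros_like(F)                           # potential whose sphere gradient gives the memory vector
    for l in range(1, lmax + 1):                     # the l = 0 mode has no inverse Laplacian
        for m in range(-l, l + 1):
            Ylm = sph_harm(m, l, PH, TH)             # scipy convention: sph_harm(m, l, azimuth, polar)
            Flm = np.sum(F * np.conj(Ylm) * dA)      # projection of F onto Y_lm
            Phi += np.real(-Flm / (l * (l + 1)) * Ylm)   # divide by the Laplacian eigenvalue -l(l+1)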
  • Ala-Lahti, Matti (2017)
    Mirror mode waves arise from antiphase, low-frequency fluctuations of the magnetic field and plasma density when the energy is conserved and a sufficient temperature anisotropy is present in the plasma. These waves are linearly polarized and are frequently observed in heliospheric plasma, in particular in different sheath structures. They are most widely studied in planetary magnetosheaths, but they are also found in cometosheaths and in the heliosheath. In addition, mirror mode waves are reported to occur in CME-driven sheaths. Knowledge of mirror modes in CME-driven sheaths is, however, very limited despite the fact that they might contribute to regulating the CME sheath plasma on a global scale, and also affect the geoeffectivity of CME-driven sheaths as well as the modulation and acceleration of energetic particles. As of yet, no statistical studies of mirror modes in CME-driven sheaths exist. In this thesis, a background to the basic physical plasma phenomena and structures in the heliosphere is given by briefly discussing the solar wind, interplanetary shocks and sheath regions. The central focus of this thesis is, however, on CME-driven sheath regions and the occurrence of mirror mode waves in them. CME-driven sheaths are turbulent plasma regions between the CME ejecta and its preceding interplanetary shock. This thesis discusses the differences between CME-driven sheaths and other heliospheric sheaths. In addition, mirror modes are considered in detail by presenting the theory of the mirror instability in both fluid and kinetic descriptions and by discussing the fundamental features of mirror modes in other heliospheric sheath regions. Previous studies of mirror modes and the methods applied in them are also reviewed extensively. A program that identifies mirror mode structures from the magnetic field data of a spacecraft is constructed for this thesis. In the identification process, the program exploits the linear polarization of mirror modes and the knowledge of the angular change of the magnetic field direction across a mirror mode structure. This new, almost fully automatic program combines previous mirror mode identification methods in a novel way, thus creating a new method for detecting and studying mirror modes in CME-driven sheath regions, as well as in other sheath regions. In this thesis, the constructed program is applied to perform a statistical study of mirror mode waves in CME-driven sheaths. Mirror modes were discovered to be common structures, but, as in planetary magnetosheaths, they occupy only a relatively small part of the CME sheath. The results show that in CME-driven sheaths mirror modes are generally low-amplitude structures that typically occur as trains of two or three mirror mode waves. In addition, the sheath plasma was found to have notable temperature anisotropies, being generally mirror unstable when mirror modes were detected. The properties of the preceding shock of a CME-driven sheath were deduced to affect mirror mode occurrence, and the shock compression was concluded to provide a source of free energy.
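    As a rough illustration of the kind of identification procedure described in this entry, the Python sketch below scans magnetic-field data with a sliding window, tests for linear polarization with a principal-component (minimum-variance) criterion, and requires a small rotation of the field direction across the candidate structure. The window length and thresholds are illustrative assumptions, not the criteria actually used in the thesis.

    import numpy as np

    def find_mirror_mode_candidates(B, window=64, pol_ratio=5.0, max_rotation_deg=20.0):
        """B: (N, 3) array of magnetic-field vectors at a fixed sampling cadence."""
        candidates = []
        for start in range(len(B) - window):
            seg = B[start:start + window]
            dB = seg - seg.mean(axis=0)                            # fluctuations about the mean field
            eigvals = np.sort(np.linalg.eigvalsh(np.cov(dB.T)))[::-1]
            linearly_polarized = eigvals[0] > pol_ratio * eigvals[1]   # one dominant variance direction
            b0, b1 = seg[0], seg[-1]
            cosang = np.dot(b0, b1) / (np.linalg.norm(b0) * np.linalg.norm(b1))
            rotation = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
            if linearly_polarized and rotation < max_rotation_deg:     # small field rotation across the structure
                candidates.append((start, start + window))
        return candidates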
  • Seppälä, Anniina (2013)
    Montmorillonite is a layered, swelling clay mineral that has the ability to absorb water, causing the mineral to swell, and to exchange its structural cations, most commonly Na^+ and Ca^{2+}. These properties are applied in various fields, including nuclear waste management in Finland. Montmorillonite is the main component of bentonite clay, which is planned to be used as a release barrier material in the final repository for spent nuclear fuel. The aim of this work was to study how water is absorbed into the interlayer spaces of Na-montmorillonite. Molecular dynamics simulations were performed on a 3-layered montmorillonite particle surrounded by free water. The amount of water initially present between the layers was varied from none to one and two water molecules per unit cell. The simulations were performed at two temperatures, 298 K and 323 K, applying the CLAYFF force field. The evolution of the water content showed practically no absorption at either temperature in the case of completely dry montmorillonite. For the other cases, montmorillonite with water initially present in the interlayers, absorption was observed, and it was faster at the higher temperature. The evolution of the interlayer thicknesses in each case showed a variation between the two interlayers of the system, which was thought to result from the different placement of substitutions in the clay layers.
  • Ilmola, Roni (2015)
    Surface growth by nanocluster deposition has attracted a lot of attention in recent years due to the possibility of affecting the electronic properties of the resulting thin films. Industry is interested in this method because with cluster deposition it is possible to manufacture thin films much faster than with single-atom deposition. In some cases, nanocluster deposition is the only method by which thin films have been deposited successfully. I have studied Si20 cluster deposition on the Si(0 0 1) surface. I used molecular dynamics simulations to simulate epitaxial silicon growth at the temperatures 300 K, 500 K, 700 K, 1000 K, 1300 K and 1600 K. I used two potential models to do this, the Tersoff and the Stillinger-Weber potentials. This work focuses on the differences in the results of these potential models at various temperatures. Each atom in the cluster had an energy of 1 eV. I observed that the growth is stronger with the Stillinger-Weber potential at almost every temperature. At 300 K no epitaxial growth was seen, and at 1600 K the substrate melted. I observed almost complete epitaxial growth with the Stillinger-Weber potential, whereas with the Tersoff potential there was an amorphous layer on top of the crystalline region. The epitaxial growth did not originate from diffusion as much as from the rearrangement of atoms at the amorphous-crystalline interface.
  • Jantunen, Ville (2017)
    We study the efficiency of and theory behind various Markov chain Monte Carlo (MCMC) update methods in the classical Heisenberg model in three dimensions. The classical Heisenberg model is a model in statistical physics that describes ferromagnetic phenomena. It is a generalization of the Ising model in which the spin is a three-dimensional unit vector instead of a scalar -1 or 1. Both models show a second-order phase transition, which is the main reason we are interested in them. In our case the transition describes the loss of magnetization of a ferromagnet as it is heated to its Curie temperature. Monte Carlo simulation of the classical Heisenberg model uses the same MCMC update methods as the Ising model. We introduce the theoretical background of the Metropolis, overrelaxation and Wolff single-cluster updates and study the dynamic critical exponents of the Metropolis and Wolff updates. The results of this study are as expected: Metropolis suffers from critical slowing down near the phase transition temperature and its autocorrelation time scales as L^5.08, where L^3 is the size of the lattice. The Wolff single-cluster update avoids critical slowing down and scales very well, with an autocorrelation time scaling as L^3.28. Even though the Wolff update scales much better, it has its downsides. Parallelization is a very important factor in modern scientific computing, and the Wolff update is very tricky to parallelize, whereas Metropolis parallelizes very well. Usually we can avoid this problem by running multiple instances of the simulation at the same time while varying simulation parameters such as the temperature. For large lattices the problem is that obtaining even moderate results requires a lot of time.
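    A minimal sketch of the single-site Metropolis update for the classical three-dimensional Heisenberg model discussed in this entry is given below. It is an illustration, not the thesis code; the lattice size L, coupling J and temperature T are placeholder values. A new random unit vector is proposed for a randomly chosen spin and accepted with probability min(1, exp(-dE/T)). The Wolff single-cluster update mentioned in the abstract instead reflects whole clusters of spins about a random plane and is not sketched here.

    import numpy as np

    rng = np.random.default_rng(0)
    L, J, T = 8, 1.0, 1.4                                   # placeholder lattice size, coupling, temperature
    spins = rng.normal(size=(L, L, L, 3))
    spins /= np.linalg.norm(spins, axis=-1, keepdims=True)  # random initial unit vectors

    def metropolis_sweep(spins):
        """One sweep = L^3 attempted single-spin updates."""
        for _ in range(L ** 3):
            x, y, z = rng.integers(0, L, size=3)
            new = rng.normal(size=3)
            new /= np.linalg.norm(new)                      # proposed new spin, uniform on the unit sphere
            neigh = (spins[(x + 1) % L, y, z] + spins[(x - 1) % L, y, z] +
                     spins[x, (y + 1) % L, z] + spins[x, (y - 1) % L, z] +
                     spins[x, y, (z + 1) % L] + spins[x, y, (z - 1) % L])
            dE = -J * np.dot(new - spins[x, y, z], neigh)   # energy change, E = -J * sum over bonds of S_i . S_j
            if dE <= 0.0 or rng.random() < np.exp(-dE / T):
                spins[x, y, z] = new                        # Metropolis acceptance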
  • Hauru, Markus (2013)
    This thesis reviews the multiscale entanglement renormalisation ansatz, or MERA, a numerical tool for the study of quantum many-body systems and a discrete realisation of the AdS/CFT duality. The thesis covers an introduction to the necessary background concepts of entanglement, entanglement entropy and tensor network states, the structure and main features of MERA, and its applications in condensed matter theory and holography. Also covered are details of the algorithmic implementation of MERA and some of its generalisations and extensions. MERA belongs to a class of variational ansätze for quantum many-body states known as tensor network states. It is especially well suited for the study of scale-invariant critical points. MERA is based on a real-space renormalisation group procedure called entanglement renormalisation, designed to systematically handle entanglement at different length scales along the coarse-graining flow. Entanglement renormalisation has been used, for example, to efficiently describe the states of Kitaev's toric code, the prime example of topological order, and to numerically study the ground state of the highly frustrated spin-1/2 Heisenberg model on a kagome lattice and various other one- and two-dimensional lattice models. The geometric and causal structure of MERA, which underlies its effectiveness as a numerical tool, also makes it a discrete version of the AdS/CFT duality. This duality describes a conformal field theory by a gravity theory in a higher-dimensional space, and vice versa. The duality is manifest in the scaling of entanglement entropy in MERA, which is governed by a law highly analogous to the Ryu-Takayanagi formula for holographic entanglement entropy, in the connection between thermal states and a black-hole-like MERA, and in the connection between correlation functions and holographic geodesics in a scale-invariant MERA. The aim of this thesis is to lead the reader to an understanding of what MERA is, how it works and how it can be used. MERA's core features and uses are presented in a comprehensive and explicit way, and a broad view of possible applications and further directions is given. Plenty of references are also offered to direct the reader to further research on how MERA may relate to his/her interests.
  • Kärkkäinen, Timo (2013)
    Neutrino oscillation is a particle physics phenomenon in which neutrino flavour is not conserved. The phenomenon was conjectured during the 1950s by Pontecorvo and confirmed during the 1990s by the Super-Kamiokande collaboration. Consequently, neutrinos must have a Dirac or Majorana mass, and a corresponding mass term must be included in the standard model. Neutrino oscillation is the first confirmed beyond-the-standard-model phenomenon. It leads to nonconservation of the quantum numbers L_e, L_μ and L_τ. Three different neutrinos have been detected so far, but their mass hierarchy and absolute masses have not yet been determined. In addition, charge-parity symmetry violation (CP violation) is expected, but not yet confirmed, in neutrino oscillation. This thesis includes a brief historical journey into neutrino physics and a lengthy discussion of the electroweak sector of the standard model (Glashow–Weinberg–Salam theory), with a detailed phenomenology of neutrino oscillations. The GLoBES simulation program and its accompanying AEDL language are introduced. Experiment definition methods in AEDL are covered extensively. The most important parameters are the neutrino flux, source power, target mass and baseline length. Statistical methods are presented briefly; the main tool is the χ²-test. The neutrino sources are assumed to be the 700 kW SPS at CERN, Switzerland, a 450 kW particle accelerator at Protvino, Russia, and a 5 MW particle accelerator at Lund, Sweden. The target is the LAGUNA detector at the Pyhäsalmi mine, Finland. Using the specifications of the LAGUNA detector currently on the drawing board and the SPS as the neutrino source, the confidence limits for determining the neutrino mass hierarchy and discovering nonzero CP violation are calculated. The mass hierarchy is almost conclusively determined: most of the δ_CP parameter space exceeds the 5σ limit, which is considered the threshold for a confirmed scientific discovery. CP violation discovery is confirmed at the 5σ limit for 70% of the δ_CP parameter space. Including both the SPS and Protvino accelerator neutrino fluxes increases the covered parameter space significantly for both mass hierarchy determination and CP violation discovery. Including also the Lund accelerator neutrino flux, the mass hierarchy is conclusively determined; CP violation discovery is confirmed at the 5σ limit for 65% of the δ_CP parameter space and at the 90% limit for 85% of the δ_CP parameter space. The Pyhäsalmi mine is 2288 km from the CERN neutrino source. This baseline is very close to the bimagic baseline of 2540 km, which allows extremely good statistics and sensitivity to the oscillation parameters. In conclusion, the Pyhäsalmi mine should be given priority when candidate sites are considered.
  • Aalto-Setälä, Laura (2014)
    In this work the connection between neutrino mass mechanisms and leptonic CP violation at collider experiments is studied. These subjects are connected on a fundamental level: the neutrino mass models dictate the form of the neutrino mass matrix, and leptonic CP violation is expressed in the neutrino mass matrix through the Dirac and Majorana phases. The mass of neutrinos has been a subject of intensive study since the discovery of neutrino oscillation in the 1990s. This was the first, and still the only, observation that could not be explained by the standard model of particle physics. Neutrino mass mechanisms provide a way to extend the symmetry group of the standard model so that the neutrino masses are included. In addition to the tree-level seesaw mechanisms, four higher-energy models, namely the minimal left-right symmetric model, the Littlest Higgs model, an SU(5) model with an adjoint fermion, and the Altarelli-Feruglio model, are reviewed. The first three extend the symmetry group of the standard model by a continuous symmetry, whereas the last extends it with a discrete flavor symmetry. It is possible that the seesaw mediators responsible for neutrino masses are within the energy reach of the LHC. The parameters of the neutrino mass matrix could then be probed at collider experiments. One could determine the existence of the Dirac and Majorana phases by studying their effects on observable quantities of the channels involving the seesaw mediators. These effects are reviewed in this work. Non-zero values of the phases would still not establish the existence of leptonic CP violation: in general, a CP-odd phase can lead to CP-even processes. It is concluded that the discovery potential of the Majorana phases is quite promising in the studied processes. For the Dirac phase, the effects are more subtle and its value will probably be determined at other experiments.
  • Taanila, Olli (University of Helsinki, 2008)
    One of the unanswered questions of modern cosmology is the issue of baryogenesis. Why does the universe contain a huge amount of baryons but no antibaryons? What kind of mechanism can produce this kind of asymmetry? One theory to explain this problem is leptogenesis. In this theory, right-handed neutrinos with heavy Majorana masses are added to the standard model. This addition introduces explicit lepton number violation to the theory. Instead of producing the baryon asymmetry directly, these heavy neutrinos decay in the early universe. If these decays are CP-violating, they produce lepton number. This lepton number is then partially converted to baryon number by the electroweak sphaleron process. In this work we start by reviewing the current observational data on the amount of baryons in the universe. We also introduce Sakharov's conditions, which are the necessary criteria for any theory of baryogenesis. We review the current data on neutrino oscillation and explain why this requires the existence of neutrino mass. We introduce the different kinds of mass terms which can be added for neutrinos, and explain how the see-saw mechanism naturally explains the observed mass scales for neutrinos, motivating the addition of the Majorana mass term. After introducing leptogenesis qualitatively, we derive the Boltzmann equations governing leptogenesis and give analytical approximations for them. Finally we review the numerical solutions of these equations, demonstrating the capability of leptogenesis to explain the observed baryon asymmetry. In the appendix, simple Feynman rules are given for theories with interactions involving both Dirac and Majorana fermions, and these are applied at tree level to calculate the parameters relevant for the theory.
  • Annala, Eemeli (2016)
    At the moment, neutron stars are some of the few objects in the Universe which allow us to study the physics of cold dense matter. This matter is denser than atomic nuclei, and the thermal energy of its particles is much smaller than their Fermi energy. This kind of research is possible because the mass-radius behavior of the neutron star population depends strongly on the equation of state of neutron star matter. Unfortunately, precise enough radius measurements have not been available until very recently, when simultaneous mass-radius measurements were developed. Since this development is still ongoing, no one has yet combined these new kinds of measurements with, for example, perturbative Quantum Chromodynamics (pQCD) calculations. This thesis has two main aims: firstly, to present the current knowledge about neutron stars and cold dense matter; secondly, to introduce a Bayesian method which connects astronomical observations and current pQCD results by using polytropes. In this fashion, it should be possible to further restrict the behavior of the equation of state of cold dense matter at neutron star densities. Nevertheless, a detailed examination of this approach is left to future studies.
  • Oksanen, Markku (University of Helsinki, 2008)
    The formulation of a quantum theory of gravitation has been a goal for theoretical physicists since the birth of quantum mechanics. Applying quantum mechanics to high-energy processes in the framework of general relativity leads to the operational noncommutativity of spacetime coordinates. Noncommutative spacetime geometries are also obtained in open string theories in certain low-energy limits. A theory of gravitation on noncommutative spacetime could be compatible with quantum mechanics, be able to capture the expected nonlocality of physics at very small distances and high energies, and also be able to reproduce Einstein's general relativity at long distances. In this work I investigate gravitation as a gauge theory of the Poincaré symmetry and aim to generalize this point of view to noncommutative spacetimes. First I review the important role of the Poincaré symmetry in relativistic physics and the derivation of the classical theory of gravitation as a gauge theory of the Poincaré symmetry. I continue by discussing noncommutative spaces and the formulation of quantum field theories on noncommutative spacetimes. The formulation of noncommutative gauge theories is explained carefully due to the local nature of gauge symmetries. Special emphasis is given to the twisted Poincaré symmetry, a new quantum symmetry of noncommutative spacetime that is respected by these theories. Challenges encountered in the formulation of noncommutative gravitation and the solutions suggested for them in the literature are discussed. I explain how all the approaches made so far lack the fundamental property of covariance under general coordinate transformations, the cornerstone of general relativity. Finally, I study the possibility of generalizing the twisted Poincaré symmetry to a local gauge symmetry in noncommutative spacetime, in the hope of obtaining a noncommutative gauge theory of gravitation. I show that such a generalization cannot be achieved by deforming the Poincaré symmetry by a covariant twist element. Thus other approaches to noncommutative gravitation and the twisted Poincaré symmetry will have to be considered in the future.
  • Raasakka, Matti (University of Helsinki, 2009)
    Our present-day understanding of the fundamental constituents of matter and their interactions is based on the Standard Model of particle physics, which relies on quantum gauge field theories. On the other hand, the large-scale dynamical behaviour of spacetime is understood via Einstein's general theory of relativity. The merging of these two complementary aspects of nature, quantum and gravity, is one of the greatest goals of modern fundamental physics; its achievement would help us understand the short-distance structure of spacetime, thus shedding light on the events in the singular states of general relativity, such as black holes and the Big Bang, where our current models of nature break down. The formulation of quantum field theories in noncommutative spacetime is an attempt to realize the idea of nonlocality at short distances, which our present understanding of these different aspects of Nature suggests, and consequently to find testable hints of the underlying quantum behaviour of spacetime. The formulation of noncommutative theories encounters various unprecedented problems, which derive from their peculiar inherent nonlocality. Arguably the most serious of these is the so-called UV/IR mixing, which makes the derivation of observable predictions especially hard by causing new, tedious divergences to which our previous well-developed renormalization methods for quantum field theories do not apply. In the thesis I review the basic mathematical concepts of noncommutative spacetime, different formulations of quantum field theories in this context, and the theoretical understanding of UV/IR mixing. In particular, I put forward new results to be published, which show that quantum electrodynamics in noncommutative spacetime defined via the Seiberg-Witten map also suffers from UV/IR mixing. Finally, I review some of the most promising ways to overcome the problem. The final solution remains a challenge for the future.
  • Långvik, Miklos (University of Helsinki, 2007)
    This master's thesis explores some of the most recent developments in noncommutative quantum field theory. This old theme, first suggested by Heisenberg in the late 1940s, has had a renaissance during the last decade due to the firmly held belief that space-time becomes noncommutative at small distances and also due to the discovery that string theory in a background field gives rise to noncommutative field theory as an effective low-energy limit. This has led to interesting attempts to create a noncommutative standard model, a noncommutative minimal supersymmetric standard model, noncommutative gravity theories, etc. This thesis reviews themes and problems such as UV/IR mixing, charge quantization, how to deal with the noncommutative symmetries, how to solve the Seiberg-Witten map, its connection to fluid mechanics, and the problem of constructing general coordinate transformations to obtain a theory of noncommutative gravity. Emphasis has been put on presenting both the group-theoretical and the string-theoretical results, so that a comparison of the two can be made.
  • Ala-Mattinen, Kalle (2018)
    We investigate a dark sector augmentation of the standard model with a keV-scale right-handed sterile neutrino field 𝑁 and a TeV-scale singlet scalar field 𝑆. The ("warm") dark matter candidate of the model is the sterile neutrino, which is produced from decays of singlet scalars. The dark sector is coupled to the standard model via the Higgs portal coupling between the singlet scalar and the Higgs field 𝐻. The momentum distribution function, which contains the full information about the production process, is obtained for the sterile neutrinos by solving a system of Boltzmann equations. For simplicity, we assume the effective number of relativistic degrees of freedom to be constant during the dark matter production. We take into account several constraints from structure formation and cosmology, and find that even this simple model gives rise to a rich phenomenology. In particular, we find that, in addition to offering a realistic dark matter candidate, the sterile neutrino can attain a highly non-thermal momentum distribution in the process. A direct consequence of this is the inability to assess the impact on structure formation with the usual estimators that require a thermal momentum distribution, such as the free-streaming length. We rely on [72], which uses the linear power spectrum, instead of the free-streaming length, to estimate the impact on structure formation; this method also works for non-thermal momentum spectra. Along the way, we discuss the evidence for dark matter, known problems of the ΛCDM model, and the basic production mechanisms in the early universe. A comprehensive analytical and numerical reduction of the necessary equations is also presented.
  • Lindroos, Olavi (University of Helsinki, 2000)
    The paradox of Schrödinger's cat has long occupied researchers trying to understand quantum mechanics. The fate of the cat has been partly responsible for the invention of ever more imaginative attempted resolutions and for the emergence of the interpretations of quantum mechanics. The cat itself has also been interpreted widely and often incorrectly, so it is best to begin by defining the object of study: first, therefore, I clarify what Schrödinger's cat is and why it has been brought up in the first place. Sufficiently strong criticism can be raised against the three best-known types of interpretation of quantum mechanics (the Copenhagen interpretation, hidden-variable theories and many-worlds theories), which is why it is sensible to look elsewhere for a resolution of the cat paradox. In this work I examine the idea that an 'interpretation' of quantum mechanics can be found within quantum mechanics itself. The fate of the cat can be determined by solving the equation of motion (the master equation) for the density matrix of an idealized point-like cat (a harmonic oscillator) in an environment modelled as a heat bath. It is found that the interaction of the environment with the cat causes a phenomenon called decoherence, which suppresses the coherence terms (the off-diagonal elements) of the density matrix exponentially as a function of time. The decoherence time (the time in which the coherence terms have decreased by a factor of e) depends mainly on the temperature and on time. In this work I go through the high- and low-temperature behaviour of the decoherence time on both long and short time scales. It turns out that decoherence is an extremely fast phenomenon, which makes observing it very difficult; in fact, decoherence was seen in experiments only in the 1990s. After this analysis it is easy to understand why the cat is not in a superposition but in a classical-like state. The cat paradox was thus only an apparent paradox, which disappears once the phenomenon is understood with sufficient precision: the cat cannot be completely isolated from its environment. In fact, the interactions between the cat's own atoms would be enough to bring about decoherence. Decoherence thus establishes the correspondence between quantum and classical physics. A better and more precise understanding of decoherence benefits fundamental research in quantum mechanics and cosmology, as well as, on the applications side, the construction of a quantum computer that may one day be built. As further research, it would be interesting to investigate how the decoherence time depends on the number of interacting systems and on different types of interaction, since this study used the simplest possible coupling between the cat and the heat bath.
  • Hartonen, Tuomo (2013)
    The ability to deduce the three-dimensional structure of a protein from its one-dimensional amino acid chain is a long-standing challenge in structural biology. Accurate structure prediction has enormous application potential in, e.g., drug development and the design of novel enzymes. In the past this problem has been studied experimentally (X-ray crystallography, nuclear magnetic resonance imaging) and computationally by simulating the molecular dynamics of protein folding. However, the latter requires enormous computing resources and the former is expensive and time-consuming. Direct contact analysis (DCA) is an inference method relying on direct correlations measured from multiple sequence alignments (MSAs) of protein families to predict contacts between amino acids in the three-dimensional structure of a protein. It solves the 21-state inverse Potts problem of statistical physics, i.e. given the correlations, what are the interactions between the amino acids of a protein. The current state of the art in the DCA approach is the plmDCA algorithm, which relies on pseudolikelihood maximization. In this study the performance of the parallelised asymmetric plmDCA algorithm is tested on a diverse set of more than 100 protein families. It is seen that, generally, for MSAs with more than approximately 2000 sequences plmDCA is able to predict more than half of the 100 top-scoring contacts correctly, with the prediction accuracy increasing almost linearly as a function of the number of sequences. Parallelisation of plmDCA is also observed to make the algorithm tens of times faster (depending on the number of CPU cores used) than the previously described serial plmDCA. Extensions to the Potts model taking into account the differences in the distributions of gaps and amino acids in MSAs are investigated. An extension incorporating the position-dependent frequencies of gaps of length one into the Potts model is found to increase the prediction accuracy for short sequences. Further and more extensive studies are, however, needed to discover the full potential of this approach.
  • Tähtinen, Sara (2014)
    Magnetic reconnection is a process occurring, for example, in space plasmas that allows rapid changes of magnetic field topology and converts magnetic energy to thermal and non-thermal plasma energy. Solar flares in particular are good examples of explosive magnetic energy release caused by magnetic reconnection, and it has been estimated that 50% of the total released energy is converted to the kinetic energy of charged particles. Despite its importance in astrophysical phenomena, the theory and the mechanisms behind magnetic reconnection are still poorly understood. In this thesis, the acceleration of electrons in a two-and-a-half-dimensional magnetic reconnection region with solar flare plasma conditions is studied using numerical modeling. The behavior of the electrons is determined by calculating the trajectories of all particles inside a simulation box. The equations of motion are solved by using a particle mover called the Boris method. The aim of this work is to better understand the acceleration of non-thermal electrons, and, for example, to explain how the inflow speed affects the final energy of the particles, which part of the reconnection area the most energetic electrons come from, and how the scattering frequencies change the energy spectra of the electrons. The focus of this thesis lies in numerical modeling, but all the relevant physics behind the subject is also briefly explained. First the basics of plasma physics are introduced and the leading models of magnetic reconnection are presented. Then the simulation setup and reasonable values for the simulation parameters are defined, and the results of the simulations are discussed. Based on these, conclusions are drawn.
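    The Boris method named in this entry is a standard particle mover; a minimal nonrelativistic sketch is given below (the charge, mass, field values and time step in the usage lines are placeholders, not the thesis parameters). The velocity update is split into a half electric kick, a rotation about the magnetic field, and a second half electric kick.

    import numpy as np

    def boris_push(x, v, E, B, q, m, dt):
        """Advance position x and velocity v of one particle by one time step dt (SI units)."""
        qmdt2 = q * dt / (2.0 * m)
        v_minus = v + qmdt2 * E                  # first half of the electric impulse
        t = qmdt2 * B                            # rotation vector
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)  # rotation about B
        v_new = v_plus + qmdt2 * E               # second half of the electric impulse
        return x + v_new * dt, v_new             # leapfrog-style position update

    # Example: one step for an electron gyrating in a weak uniform magnetic field.
    x, v = np.zeros(3), np.array([1.0e5, 0.0, 0.0])
    x, v = boris_push(x, v, E=np.zeros(3), B=np.array([0.0, 0.0, 1.0e-8]),
                      q=-1.602e-19, m=9.109e-31, dt=1.0e-4)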
  • Sillanpää, Ilkka (University of Helsinki, 2002)
    The one-dimensional method of characteristics is a forward method for determining ionospheric currents from electric and magnetic field measurements. In this work the applicability of the method was studied with respect to polar electrojet and shear flow events, as these are the predominant ionospheric current situations and are often one-dimensional, the fields thus depending only on latitude. In this work the characteristic equations are derived from Maxwell's equations and Ohm's law. A program was developed with an algorithm applying the one-dimensional method of characteristics to ionospheric electric field measurements by the STARE radars and ground-based magnetic field measurements by the IMAGE magnetometer network. The magnetic field was upward continued to the altitude of the ionospheric horizontal currents (100 km). The applicability of the one-dimensional method of characteristics was shown by analyzing the results from three electric current events. The length of these events varied between 10 and 40 minutes, and the study area was limited to the STARE and IMAGE measurement area over Scandinavia and part of the Arctic Ocean. The results were accurate and relatively detailed and gave insight into, for example, the origin of the features of the field-aligned currents. The estimated ratio of the Hall and Pedersen conductances, the alpha parameter, is needed in the method. It was shown that the alpha dependence follows the theoretical predictions, and thus the Hall conductance and the east-west component of the horizontal currents (the Hall current, which dominated the horizontal currents) have practically no dependence on alpha. Also, the general features of the conductance and current profiles were not dependent on alpha. Field-aligned current (FAC) results obtained during one of the events were compared with concurrent Cluster satellite measurements at a high-altitude orbit above the area of study. Two maxima and a minimum of the FAC occurred simultaneously in the results, with very comparable numerical values after the satellite results were mapped down. The one-dimensional method of characteristics was found to be very successful in determining ionospheric conductances and currents in detail from ionospheric electric and magnetic field measurements when the assumption of the one-dimensionality of the event is valid. It seems quite feasible to develop the algorithm for applying the method over longer time periods, whereas here only individual events were studied.
  • Dahl, Jani (2018)
    At the end of the inflationary epoch, about 10^(−12) seconds after the Big Bang singularity, the universe was filled with plasma consisting of quarks and gluons. At some stage the cooling of the universe could have led to the occurrence of first-order cosmological phase transitions that proceed by nucleation and expansion of bubbles all over the primordial plasma. Cosmological turbulence is generated as a consequence of bubble collisions and acts as a source of primordial gravitational waves. The purpose of this thesis is to provide an overview of cosmological turbulence as well as the corresponding gravitational wave production, and to compile some of the results obtained to date. We also touch on the onset of cosmological turbulence by analysing shock formation. In the one-dimensional case, considering only right-moving waves, the result is Burgers' equation. The development of a power spectrum with random initial conditions under Burgers' equation is calculated numerically using the Euler method with sufficiently small step sizes. In both the viscous and inviscid cases, the result is the presence of a −8/3 power law in the inertial range at the time of shock formation.
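    A minimal numerical sketch of the kind of calculation described in this entry is given below (grid size, viscosity, time step and number of steps are illustrative values, not those of the thesis): the viscous Burgers' equation u_t + u u_x = ν u_xx is evolved on a periodic domain with explicit Euler time stepping from a random large-scale initial condition, and the velocity power spectrum is read off from the Fourier transform.

    import numpy as np

    N, L_box, nu = 1024, 2.0 * np.pi, 1.0e-2           # grid points, domain length, viscosity
    dx, dt = L_box / N, 1.0e-4
    rng = np.random.default_rng(1)

    x = np.arange(N) * dx
    u = sum(rng.normal() * np.sin(k * x + rng.uniform(0.0, 2.0 * np.pi)) for k in range(1, 5))

    for _ in range(5000):                              # evolve to roughly the shock-formation time
        u_x = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
        u_xx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx ** 2
        u = u + dt * (-u * u_x + nu * u_xx)            # forward Euler step

    power = np.abs(np.fft.rfft(u)) ** 2                # power spectrum; its inertial range can be
                                                       # compared with the -8/3 law discussed above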
  • Ikonen, Joni (2016)
    Quantum computers store and manipulate information in individual quantized energy levels. These devices, not yet realized in their full potential, have the ability to perform certain computational tasks more efficiently than any classical computer. One possible way to implement a quantum computer is to use superconducting circuits controlled by single-mode electromagnetic fields. These circuits constitute the physical quantum bits, or qubits, that are used to store quantum information. A complete, fault-tolerant quantum computer potentially consists of at least millions of physical qubits, which are grouped to form fault-tolerant logical qubits. Controlling each physical qubit individually requires a great amount of energy, and hence a future challenge is to reduce the energy consumption of qubit control while maintaining high precision. In this thesis, we derive a fundamental upper bound for the gate fidelity of a single-qubit NOT gate implemented with a single resonant driving pulse. It is shown that the upper bound approaches unity inversely proportionally to the increasing mean photon number of the pulse. Furthermore, we find that the upper bound is achieved with an optimal superposition of squeezed states. The typically employed coherent state produces a gate error twice as high as that of the corresponding optimal state. In addition, we present and numerically study a correction protocol that allows using the same drive state for multiple qubit operations. This sustained state is refreshed by sequentially coupling ancillary qubits to it, effectively resetting it and removing entanglement with the previously operated qubits. Thus our protocol allows using the same drive state to implement NOT gates on different qubits indefinitely, and hence provides a possible route to energy-efficient large-scale quantum computing.