Browsing by Title

  • Piispa, Aleksi (2022)
    The nature of dense matter is one of the greatest mysteries in high-energy physics. For example, we do not know how QCD matter behaves at neutron star densities, where the matter is strongly coupled, so auxiliary methods have to be applied. One of these methods is the AdS/CFT correspondence, which maps a strongly coupled field theory to a weakly coupled gravity theory. The best-known example of this correspondence is the duality between N = 4 super Yang-Mills (SYM) and type IIB supergravity in AdS_5 × S^5. This duality at finite temperature and chemical potential is the one we invoke in our study. It has been hypothesized that dense matter would be in a color superconducting phase, where pairs of quarks form a condensate. This has a natural interpretation in the gravity theory. The AdS_5 × S^5 geometry is sourced by a stack of N coincident D3-branes, and this N corresponds to the gauge group SU(N) of N = 4 SYM. To study spontaneous breaking of this gauge group, one studies systems where D3-branes have separated from the stack. In this work we present two methods of studying the possibility of separating these branes from the stack. First we derive, using the action principle, an effective potential for a probe brane, which covers the dynamics of a single D3-brane in the bulk. We then construct an effective potential for a shell built out of multiple branes, using the Israel junction conditions. A single brane in the bulk corresponds to SU(N) → SU(N−1) × U(1) symmetry breaking, and a shell of k branes corresponds to SU(N) → SU(N−k) × U(1)^k symmetry breaking. Similar spontaneous breaking of the gauge group happens in QCD when we transition to a CSC phase, and hence these phases are called color superconducting. We find that for sufficiently high chemical potential the system is susceptible to single-brane nucleation. The phase with greater breaking of the gauge group, which corresponds to having a shell made of branes in the bulk, is metastable. This implies that we were able to construct CSC phases of N = 4 SYM; however, the exact details of the phase diagram structure are left for future research.
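    For orientation, the background referred to here is the standard near-horizon AdS_5 × S^5 solution sourced by N coincident D3-branes (a textbook relation, not a formula specific to this thesis):
    \[
      ds^2 = \frac{r^2}{L^2}\,\eta_{\mu\nu}\,dx^\mu dx^\nu + \frac{L^2}{r^2}\,dr^2 + L^2\, d\Omega_5^2,
      \qquad L^4 = 4\pi g_s N \alpha'^2,
    \]
    so the rank N of SU(N) enters through the curvature radius L, and pulling k branes out of the stack deforms this background in the way that realizes the SU(N) → SU(N−k) × U(1)^k breaking described above.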
  • Liye, He (2014)
    In this thesis, we concentrate on the problem of modelling real document collections, especially sequential document collections. The goal is to discover important hidden topics in the collection automatically by statistical modelling of its content. For sequential document collections, we also want to capture how the topics change over time. To date, several computational tools such as latent Dirichlet allocation (LDA) have been developed for modelling document collections. In this thesis, we develop new topic models for modelling the dynamic characteristics of a sequential document collection such as a news archive. We are, for example, interested in splitting the topics into long-term topics such as 'Eurozone crisis' that are discussed over years, and short-term topics such as 'Winter Olympics in 2014' that are only popular for several weeks. We first review popular models for detecting hidden topics and their evolution, and then propose two new approaches to detect these two kinds of topics. To provide real-world data for the evaluation of the new approaches, we additionally design a pipeline for constructing sequential document collections by collecting documents from the Web. To investigate the performance of the new approaches from different aspects, we conduct qualitative and quantitative experiments on two kinds of datasets: news documents collected by the pipeline and 17 years of documents from the Neural Information Processing Systems (NIPS) conference. The qualitative experiments evaluate the quality of the discovered topics, whereas the quantitative experiments concern their ability to predict new words in unseen documents.
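    As a rough illustration of the kind of baseline topic model the thesis starts from (a minimal sketch using scikit-learn's LatentDirichletAllocation on an invented toy corpus, not the author's code):

      # Minimal LDA sketch; the corpus and parameter values are illustrative assumptions.
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation

      docs = [
          "eurozone crisis bailout banks debt",
          "winter olympics medals hockey sochi",
          "eurozone debt austerity banks",
          "olympics skiing medals winter",
      ]

      vectorizer = CountVectorizer()
      X = vectorizer.fit_transform(docs)              # document-term count matrix
      lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

      terms = vectorizer.get_feature_names_out()
      for k, topic in enumerate(lda.components_):
          top = topic.argsort()[-4:][::-1]            # four highest-weight terms per topic
          print(f"topic {k}:", [terms[i] for i in top])

    The dynamic models developed in the thesis extend this kind of static decomposition by additionally tracking how topics evolve across time slices.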
  • Korpi, Antti (2016)
    Ring-opening polymerization and click reactions were used to synthesize thermo-responsive glyco-block copolymers consisting of a polyether block with pendant α-D-mannose groups and random copolymer blocks of poly(glycidyl methyl ether)-poly(epoxyhexane). The thermo-responsive block was synthesized as a random copolymer to decrease the phase transition temperature to a usable region. Temperature responsiveness would enable the polymers to switch between dissolved and aggregated states. Such glycopolymers would be interesting candidates for studying carbohydrate-lectin interactions and drug delivery properties. The synthesized polymers were analyzed using nuclear magnetic resonance and Fourier-transform infrared spectroscopy, turbidimetry and differential scanning calorimetry. Both glycopolymers and thermo-responsive copolymers were synthesized. The latter showed good control over the polymerization, leading to clickable azide functionality and the desired monomer ratios in the copolymers. Altering the ratio of glycidyl methyl ether to epoxyhexane in the feed led to variations in the cloud points and glass transition temperatures of the copolymers. The synthesis of glycopolymers proved difficult and could not be initiated using clickable propargyl alcohol. Also, no effective way was found to purify the glycopolymers initiated using benzyl alcohol. Combination of the glycopolymers and thermo-responsive copolymers was attempted using a click reaction. A triazole signal detected by nuclear magnetic resonance spectroscopy suggests the reaction was successful; however, further studies are required to confirm this.
  • Varadharajan, Divya (2014)
    Solutions of thermoresponsive polymers exhibit a drastic and discontinuous change in their properties with temperature. A thermoresponsive polymer that is soluble at low temperatures but undergoes a reversible phase transition in a solvent with rising temperature, resulting in precipitation or cloud formation, is said to exhibit lower critical solution temperature (LCST)-type behaviour. On the other hand, polymers that exhibit upper critical solution temperature (UCST)-type behaviour are soluble in water at temperatures above the UCST and become reversibly insoluble when the temperature decreases below it. This work deals with the synthesis of novel upper critical solution temperature block copolymers and the effect of pH and electrolyte on their cloud point temperatures. The polymers poly(N-acryloylglycinamide) (PNAGA), poly(ethylene oxide)-b-poly(N-acryloylglycinamide) (PEO-b-PNAGA), poly(N-isopropylacrylamide)-b-poly(N-acryloylglycinamide) (PNIPAAm-b-PNAGA) and poly(ethylene oxide)-b-poly(N-acryloylglycinamide)-b-poly(N-isopropylacrylamide) (PEO-b-PNAGA-b-PNIPAAm) were synthesized by reversible addition-fragmentation chain-transfer (RAFT) polymerization in dimethyl sulphoxide. PEO-b-PNAGA and PEO-b-PNAGA-b-PNIPAAm exhibited UCST-type behaviour both in pure water (studied by NMR) and in 0.1 M NaCl solutions (studied by turbidimetry). The poly(ethylene oxide) (PEO) block played an important role in enhancing the UCST behaviour of PNAGA by improving the polymer's solubility. Still, higher cloud points were observed in 0.1 M NaCl than for PNAGA due to the presence of the hydrophobic dodecyl end group. Measuring the particle size between 10 and 50 °C by dynamic light scattering showed that the polymers phase-separated on cooling below the UCST. PEO-b-PNAGA-b-PNIPAAm showed multiresponsive behaviour both in pure water and in electrolyte solution, exhibiting both an LCST and a UCST. A change in pH had a dramatic effect on the UCST of PNAGA owing to the carboxylic acid end group, shifting the cloud points to higher temperatures with increasing pH. The cloud points of the PNAGA block copolymers in pH 4 buffer solutions were lower than those of PNAGA itself due to the high solubility of the poly(ethylene oxide) block in aqueous solutions.
  • Hirvonen, Joonas (2020)
    We apply the modern effective field theory framework to study the nucleation rate in high-temperature first-order phase transitions. With this framework, an effective description of the critical bubble can be constructed, and the exponentially large contributions to the nucleation rate can then be computed from the effective description. The results can be used to make more accurate predictions for cosmological first-order phase transitions, for example the gravitational wave spectrum from a transition, which is important for the planned LISA experiment. We start by reviewing a nucleation rate calculation for a classical scalar field to understand how the critical bubble arises, via a saddle-point approximation, as the central object of the nucleation rate calculation. We then focus on the statistical part of the nucleation rate, coming from the Boltzmann suppression of nucleating bubbles. This is done by constructing, from the thermal field theory, an effective field theory that can describe the critical bubble. We give an example calculation with a renormalizable model of two $\mathbb{Z}_2$-symmetric scalar fields. The critical bubbles of the model and their Boltzmann suppression are studied numerically, for which we further develop a recently proposed method.
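    For context, the Boltzmann suppression referred to above takes the standard form (a generic relation rather than a result specific to this thesis)
    \[
      \Gamma \;\sim\; A(T)\, e^{-\Delta F_c(T)/T},
    \]
    where $\Delta F_c(T)$ is the free energy of the critical bubble obtained from the saddle point of the effective description, and the prefactor $A(T)$ collects the dynamical and fluctuation contributions not captured by the exponent.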
  • Al Mussa, Wafa (2022)
    Nucleic acids are natural single- or double-stranded polymers consisting of deoxyribo- or ribonucleosides linked to each other by phosphodiester bonds. When such chains are prepared by chemical methods, the phosphate group must be activated in a specific way, and the functional groups that do not participate in the reaction must be protected temporarily or permanently. Interest in nucleic acid chemistry stems from the growing need for synthetic oligomers and their analogues as essential research tools in molecular biology and medicine. The literature part of the thesis presents various methods for the chemical synthesis of short oligonucleotides. Phosphodiester bonds are usually formed via either phosphotriester or phosphite-triester intermediates. Owing to the higher reactivity of P(III) intermediates, the phosphite-triester method, and in particular the phosphoramidite method, have attracted attention. Several different nucleoside phosphoramidites have been proposed as starting materials for oligonucleotide synthesis in the search for a balance between stability and reactivity. For this reason, the H-phosphonate method, which combines the advantages of both the phosphotriester and phosphite-triester methods as well as those of the phosphodiester method (e.g. no protecting group at the phosphorus centre), can be used as an alternative to the phosphoramidite method, especially in the synthesis of RNA and of acid-labile oligonucleotide analogues. All methods have their advantages and drawbacks, however, so there is still no universally applicable and efficient synthesis method; rather, different methods suit different cases. For large-scale synthesis, for example, the reaction time, reagents, purification method and other resources must be taken into account. In addition, the approaches are typically either very laborious, especially in the purification of the final product, or multi-step sequences with low overall yields. In the experimental part, pentacarboxycyclopentadienes (PCCPs) were prepared as three kinds of Brønsted acid catalysts. The possibility of using PCCP derivatives for regioselective protection of the 5'-hydroxyl group of a nucleoside with an acetal group was investigated. The method yielded a satisfactory amount of 5'-O-acetal-protected thymidine and appears promising with some further development.
  • Mustonen, Sampo (2013)
    This thesis presents and proves the Nullstellensatz, i.e. Hilbert's zero-locus theorem. The proof assumes familiarity with the material of the course 'Algebra I'. The Nullstellensatz is a multidimensional generalization of the fundamental theorem of algebra. It gives a bijective correspondence, known as the Hilbert correspondence, between varieties and radical ideals. Many central results of algebraic geometry rest on the Nullstellensatz. Before proving the Nullstellensatz, the thesis introduces some theory of ideals and varieties. In addition, theorems on Noetherian rings and modules needed for the proof of the Nullstellensatz are proved. Finally, some corollaries of the Nullstellensatz are proved.
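    For reference, the statement in question is the standard one: for an algebraically closed field k and an ideal J of the polynomial ring k[x_1, ..., x_n],
    \[
      I\bigl(V(J)\bigr) = \sqrt{J},
    \]
    which gives the bijective correspondence between affine varieties in k^n and radical ideals mentioned above.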
  • Kempf, Yann (2012)
    Vlasiator is a new massively parallel hybrid-Vlasov simulation code being developed at the Finnish Meteorological Institute with the purpose of building a new global magnetospheric model that goes beyond magnetohydrodynamics (MHD). It solves Vlasov's equation for the ion distribution function in the full six-dimensional phase space and describes the electrons as a massless charge-neutralising fluid using the MHD equations, thus including ion kinetic effects. The Vlasov equation solver is based on a second-order, three-dimensional finite volume wave-propagation algorithm, making use of Strang splitting to separate translation in space from acceleration in velocity space. The electromagnetic fields are obtained through a second-order, finite volume upwind constrained transport method which conserves the divergence of the magnetic field by construction. This work presents the numerical and physical validation tests developed and/or run by the author for Vlasiator, without covering the technical aspects pertaining to implementation or parallelisation. The numerical quality of the solvers is assessed in terms of their isotropy, their conservation of div B = 0 and their order of accuracy in space and time. The physical validation tests include an assessment of the diffusive properties of the Vlasov solver, a brief discussion of results obtained from the Riemann problem displaying kinetic effects in the shock solution, and finally dispersion plots for quasiperpendicular and quasiparallel wave modes, which are presented and discussed. The conclusions are that Vlasiator performs well and in line with the expected characteristics of the methods implemented, provided the resolution is good enough. In space, ion kinetic scales should be resolved for kinetic effects going beyond an MHD description to emerge. In velocity space the resolution should yield a smooth discretisation of the ion distribution function, otherwise spurious non-physical artefacts can crop up in the results. The higher-order correction terms included in the solvers ensure good orders of accuracy even for discontinuous solutions, the conservation of div B = 0 holds up to floating-point accuracy, and the dispersion plots match analytic solutions remarkably well.
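    For reference, the ion kinetic equation solved by the code is the standard Vlasov equation coupled to the Lorentz force,
    \[
      \frac{\partial f}{\partial t} + \mathbf{v}\cdot\nabla_{\mathbf{x}} f + \frac{q}{m}\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right)\cdot\nabla_{\mathbf{v}} f = 0,
    \]
    and the Strang splitting mentioned above advances the spatial translation term and the velocity-space acceleration term in separate, alternating sub-steps.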
  • Yrjölä, Juhana Antero (University of Helsinki, 2007)
    The module of a quadrilateral is a positive real number which divides quadrilaterals into conformal equivalence classes. This is an introductory text on the module of a quadrilateral with some historical background and some numerical aspects. The work discusses the following topics: 1. Preliminaries, 2. The module of a quadrilateral, 3. The Schwarz-Christoffel mapping, 4. Symmetry properties of the module, 5. Computational results, and 6. Other numerical methods. The appendices include numerical evaluation of the elliptic integrals of the first kind, MATLAB programs and scripts, and possible topics for future research. The numerical results section covers additive quadrilaterals and the module of a quadrilateral under movement of one of its vertices.
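    As a reminder of the underlying definition (standard, not specific to this thesis): a quadrilateral Q(z_1, z_2, z_3, z_4) is a Jordan domain with four marked boundary points, and it can be mapped conformally onto a rectangle with vertices 0, 1, 1 + ih, ih so that the marked points go to the corners; the number
    \[
      \operatorname{mod} Q(z_1, z_2, z_3, z_4) = h
    \]
    is the module of the quadrilateral, and two quadrilaterals are conformally equivalent exactly when their modules agree.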
  • Sanders, Julia (2022)
    In this thesis, we demonstrate the use of machine learning in numerically solving both linear and non-linear parabolic partial differential equations. By using deep learning, rather than more traditional, established numerical methods (for example, Monte Carlo sampling) to calculate numeric solutions to such problems, we can tackle even very high-dimensional problems, potentially overcoming the curse of dimensionality, which arises when the computational complexity of a problem grows exponentially with the number of dimensions. In Chapter 1, we describe the derivation of the computational problem needed to apply the deep learning method in the case of the linear Kolmogorov PDE. We start with an introduction to a few core concepts in stochastic analysis, particularly stochastic differential equations, and define the Kolmogorov backward equation. We describe how the Feynman-Kac theorem implies that the solution to the linear Kolmogorov PDE is a conditional expectation, and therefore how we can turn the numerical solution of such a PDE into a minimisation problem. Chapter 2 discusses the key ideas behind deep learning; specifically, what a neural network is and how we can apply it to solve the minimisation problem from Chapter 1. We describe the key features of a neural network, the training process, and how parameters can be learned through gradient descent based optimisation. We summarise the numerical method in Algorithm 1. In Chapter 3, we implement a neural network and train it to solve a 100-dimensional linear Black-Scholes PDE with underlying geometric Brownian motion, and similarly with correlated Brownian motion. We also illustrate an example with a non-linear auxiliary Itô process: the stochastic Lorenz equation. We additionally compute a solution to the geometric Brownian motion problem in 1 dimension, and compare the accuracy of the solution found by the neural network with that found by two other numerical methods, Monte Carlo sampling and finite differences, as well as with the solution found using the implicit formula. In 2 dimensions, the solution of the geometric Brownian motion problem is compared against a solution obtained by Monte Carlo sampling, which shows that the neural network approximation falls within the 99% confidence interval of the Monte Carlo estimate. We also investigate the impact of the frequency of re-sampling training data and of the batch size on the rate of convergence of the neural network. Chapter 4 describes the derivation of the equivalent minimisation problem for solving a Kolmogorov PDE with non-linear coefficients, where we discretise the PDE in time and derive an approximate Feynman-Kac representation on each time step. Chapter 5 demonstrates the method on an example of a non-linear Black-Scholes PDE and a Hamilton-Jacobi-Bellman equation. The numerical examples are based on the code by Beck et al. in their papers "Solving the Kolmogorov PDE by means of deep learning" and "Deep splitting method for parabolic PDEs", and are written in the Julia programming language, with use of the Flux library for machine learning in Julia. The code used to implement the method can be found at https://github.com/julia-sand/pde_approx
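    The Feynman-Kac representation that turns the linear Kolmogorov PDE into a learning problem can be summarised as follows (a standard formulation; the symbols are chosen here only for illustration): if $X_t$ solves the SDE $dX_t = \mu(X_t)\,dt + \sigma(X_t)\,dW_t$, then
    \[
      u(t, x) = \mathbb{E}\bigl[\varphi(X_T) \,\big|\, X_t = x\bigr]
    \]
    solves the Kolmogorov backward equation with terminal condition $u(T, \cdot) = \varphi$, so a network can be trained to minimise the mean squared distance between its output at $X_0$ and $\varphi(X_T)$ over sampled paths, which is the minimisation problem referred to in Chapters 1-3.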
  • Liesipohja, Susanna (2014)
    Logarithmic capacity is important in several areas of applied mathematics and goes by different names depending on the field. In number theory, for example, the logarithmic capacity is called the transfinite diameter, and in polynomial approximation it is known as the Chebyshev constant. In potential theory, the logarithmic capacity is defined as a measure of the size of a compact set in C. Yet despite its importance in many fields of research, the logarithmic capacity is extremely difficult to compute. Thanks to its connection to Green's functions, it can be computed analytically for certain simpler sets, such as ellipses and squares, but for more complicated sets only upper and lower bounds can be estimated. For this reason, several numerical methods have been developed for this purpose. The beginning of this thesis presents the background needed for defining and computing the logarithmic capacity. Chapter 4 presents the definition of logarithmic capacity and its connection to Green's functions, and shows how this connection can be used to compute the capacity analytically. It also gives some bounds for the logarithmic capacity, as well as the definition of the transfinite diameter and its relation to the logarithmic capacity. Chapter 5 presents four numerical methods for approximating the logarithmic capacity: the Dijkstra-Hochstenbach method, Rostand's method, the Ransford-Rostand method, and the use of Schwarz-Christoffel mappings for computing the logarithmic capacity. Rostand's method is also implemented as a MATLAB program.
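    For reference, two standard characterisations of the quantity discussed above: if $g_\Omega(z, \infty)$ is the Green's function of the complement of the compact set K with pole at infinity, then
    \[
      g_\Omega(z, \infty) = \log|z| - \log \operatorname{cap}(K) + o(1), \qquad z \to \infty,
    \]
    and the transfinite diameter mentioned in the abstract equals the capacity,
    \[
      \operatorname{cap}(K) = \lim_{n\to\infty} \max_{z_1, \dots, z_n \in K} \prod_{j<k} |z_j - z_k|^{2/(n(n-1))}.
    \]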
  • Miettinen, Kari (University of Helsinki, 1993)
    This study discusses the methods, algorithms and implementation techniques involved in the computational solution of the unconstrained minimization problem $\min_{x \in \mathbb{R}^n} f(x)$, where $f : \mathbb{R}^n \to \mathbb{R}$ and $\mathbb{R}^n$ denotes the n-dimensional Euclidean space. The main goal of this study was to implement an easy-to-use software package, running on personal computers, for unconstrained minimization of multidimensional functions. This software package includes C language implementations of six minimization methods (listed below), a user interface for entering each minimization problem, and an interface to a general software system called Mathematica, which is used for plotting the problem function and the minimization route. The following minimization methods are discussed: parabolic interpolation in one dimension, and the downhill simplex, direction set, variable metric, conjugate gradients and modified steepest descent methods in multidimensions. The first part of this study discusses the theoretical background of the minimization algorithms to be implemented in the software package. The second part introduces the overall design of the minimization software and describes in greater detail the individual software modules which, as a whole, implement the software package. The third part introduces the techniques for testing the minimization algorithms, describes the set of test problems, and discusses the test results.
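    As a small illustration of two of the listed ideas (the thesis package itself is written in C; this is only a Python stand-in on an invented quadratic test function, combining a steepest-descent direction with a one-dimensional parabolic line search):

      # Toy steepest descent with a three-point parabolic line search.
      # The function, starting point and sample points are illustrative assumptions.
      import numpy as np

      A = np.diag([1.0, 10.0])                         # simple quadratic bowl

      def f(x):
          return 0.5 * x @ A @ x

      def grad(x):
          return A @ x

      def parabolic_step(phi, a=0.0, b=0.5, c=1.0):
          """Minimiser of the parabola through (a, phi(a)), (b, phi(b)), (c, phi(c))."""
          fa, fb, fc = phi(a), phi(b), phi(c)
          num = (b - a) ** 2 * (fb - fc) - (b - c) ** 2 * (fb - fa)
          den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
          return b - 0.5 * num / den

      x = np.array([3.0, 1.0])
      for _ in range(50):
          d = -grad(x)                                 # steepest-descent direction
          if np.linalg.norm(d) < 1e-8:
              break
          t = parabolic_step(lambda t: f(x + t * d))   # 1-D parabolic interpolation
          x = x + t * d
      print(x)   # should be close to the minimiser [0, 0]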
  • Tähtinen, Sara (2014)
    Magnetic reconnection is a process occurring in, e.g., space plasmas that allows rapid changes of magnetic field topology and converts magnetic energy to thermal and non-thermal plasma energy. Solar flares in particular are good examples of explosive magnetic energy release caused by magnetic reconnection, and it has been estimated that 50% of the total released energy is converted to the kinetic energy of charged particles. In spite of being such an important process in astrophysical phenomena, the theory and the mechanisms behind magnetic reconnection are still poorly understood. In this thesis, the acceleration of electrons in a two-and-a-half-dimensional magnetic reconnection region with solar flare plasma conditions is studied using numerical modeling. The behavior of the electrons is determined by calculating the trajectories of all particles inside a simulation box. The equations of motion are solved using a particle mover called the Boris method. The aim of this work is to better understand the acceleration of non-thermal electrons and, for example, to explain how the inflow speed affects the final energy of the particles, what part of the reconnection area the most energetic electrons come from, and how the scattering frequencies change the energy spectra of the electrons. The focus of this thesis lies in numerical modeling, but all the relevant physics behind the subject is also briefly explained. First the basics of plasma physics are introduced and leading models of magnetic reconnection are presented. Then the simulation setup and reasonable values for the simulation parameters are defined, and the results of the simulations are discussed. Based on these, conclusions are drawn.
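    The Boris mover mentioned above follows a well-known scheme: a half electric kick, a rotation about the magnetic field, and a second half electric kick. A minimal non-relativistic Python sketch with invented field values (illustrative only, not the simulation code used in the thesis):

      # One Boris step for a non-relativistic charged particle; the constants
      # and uniform E, B fields below are assumptions chosen for demonstration.
      import numpy as np

      def boris_step(x, v, q_over_m, E, B, dt):
          """Advance velocity v and position x by one time step dt."""
          v_minus = v + 0.5 * q_over_m * E * dt        # first half electric kick
          t = 0.5 * q_over_m * B * dt                  # rotation vector
          s = 2.0 * t / (1.0 + np.dot(t, t))
          v_prime = v_minus + np.cross(v_minus, t)     # magnetic rotation, step 1
          v_plus = v_minus + np.cross(v_prime, s)      # magnetic rotation, step 2
          v_new = v_plus + 0.5 * q_over_m * E * dt     # second half electric kick
          return x + v_new * dt, v_new

      # Example: electron-like particle gyrating in a uniform magnetic field.
      x, v = np.zeros(3), np.array([1.0e5, 0.0, 0.0])
      E, B = np.zeros(3), np.array([0.0, 0.0, 1.0e-8])
      for _ in range(1000):
          x, v = boris_step(x, v, q_over_m=-1.76e11, E=E, B=B, dt=1.0e-6)
      print(np.linalg.norm(v))   # |v| is conserved by the rotation (up to roundoff)

    The scheme is popular for this kind of test-particle work because the rotation step preserves the particle's kinetic energy in a pure magnetic field.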
  • Lammi, Hannu (2015)
    This work explores the lateral spreading of hot, thick, Paleoproterozoic crust via a series of 2D thermomechanical numerical models based on two geometrical a priori models of the thickened crust: plateau and plateau margin. High Paleoproterozoic radiogenic heat production is assumed. The material viscosity is temperature-dependent following the Arrhenius law. The experiments use two sets of rheological parameters for the crust: dry (granite/felsic granulite/mafic granulite) and wet (granite/diorite/mafic granulite). The results of the modeling are compared to seismic reflection sections and surface geological observations from the Paleoproterozoic Svecofennian orogen. Numerical modelling is performed with Ellipsis, a particle-in-cell finite element code suitable for 2D thermomechanical modelling of lithospheric deformation. It uses Lagrangian particles for tracking material interfaces and histories, which allow recording of material P-T-t paths. The plateau models are based on a 480 km long section of 65 km thick three-layer plateau crust. In the plateau margin models, a transition from 65 km thick plateau to 40 km thick foreland is imposed in the middle of the model. The models are extended symmetrically from both ends at slow (1.9 mm/a) and fast (19 mm/a) velocities. Gravitational collapse is simulated with an additional set of fixed-boundary plateau margin models. The models study the effect of freely moving boundaries on the crustal structure and the conditions for mid-crustal flow. Strong mid-crustal channel flow is seen in plateau margin models with dry rheology and slow extension or with fixed boundaries. With fast extension or wet rheology, channel flow weakens or disappears. In models with slow extension or fixed boundaries, partial melting controls the style of deformation in the middle crust. Vertical movement of the partially molten material destroys lateral flow structures in the plateau regions. According to the P-T-t paths, the model materials do not experience high enough temperatures to match the HT-LP metamorphic conditions typical of Svecofennian orogenic rocks. Metamorphic conditions in the dry rheology models have counterparts in the LT-LP (>650 °C at ≤600 MPa) amphibolite facies rocks of the Pielavesi area. Plateau margin models with dry rheology and slow extension or fixed boundaries developed mid-crustal channel flow, thinning of the middle crust, exhumation of mid-crustal domes and a smooth Moho, all of which are found in crustal-scale reflection sections. The results of this work suggest a plateau margin architecture prior to extension that took place at slow velocities or through purely gravitational collapse, although the peak temperature of Svecofennian HT-LP metamorphism was not attained.
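    The temperature dependence referred to above has the generic Arrhenius form (the exact parameterisation used in Ellipsis may differ in detail),
    \[
      \eta(T) = \eta_0 \exp\!\left(\frac{Q}{R T}\right),
    \]
    where $\eta_0$ is a reference viscosity, Q an activation energy, R the gas constant and T the absolute temperature; viscosity thus drops rapidly with increasing temperature, which is what allows the hot middle crust to flow in the experiments described here.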
  • Wang, Yijun (2020)
    The Southern Andes is an important region for studying strain partitioning behavior due to the variable nature of its subduction geometry and continental mechanical properties. Along the plate margin between the Nazca plate and the South American plate, the strain partitioning behavior varies from north to south, while the plate convergence vector shows little change. The study area, the LOFZ region, lies between 38°S and 46°S in the Southern Andes, around 100 km east of the trench. It has been characterized as an area bounded by margin-parallel strike-slip faults that create a forearc sliver, the Chiloe block. It is also located on top of an active volcanic zone, the Southern Volcanic Zone (SVZ). This area is notably different from the Pampean flat-slab segment directly to the north of it (between latitudes 28°S and 33°S), where volcanic activity is absent and slip seems to be accommodated completely by oblique subduction. Seismicity in the central LOFZ is spatially correlated with NE-trending margin-oblique faults that are similar to the structure of SC-like kinematics described by Hippertt (1999). The margin-oblique faults and rhomb-shaped domains that accommodate strain have also been captured in analog experiments by Eisermann et al. (2018), who relate the change in GPS velocity at the northern end of the LOFZ to a southward decrease in crustal strength, possibly caused by the change in dip angle. This project uses DOUAR (Braun et al. 2008), numerical modelling software, to explore the formation of the complex fault system in the LOFZ in relation to strain partitioning in the Southern Andes. We implement numerical versions of the analog models from Eisermann et al. (2018), called the MultiBox and NatureBox models, to test whether analog modelling results can be reproduced with numerical models. We also create simplified models of the LOFZ, the Natural System models, to compare the model displacement field with the deformation pattern in the area. Our numerical model results in general replicate the findings from the MultiBox experiment of Eisermann et al. (2018). We observe the formation of NW-trending margin-oblique faulting in the central deformation zone, which creates rhomb-shaped blocks together with the margin-parallel faults. More strain is accommodated in the stronger part of the model, where the strain is more distributed across the area or prefers to settle on a few larger bounding faults, whereas in the weaker part of the model the strain tends to localize on more, smaller faults. The margin-oblique faults and rhomb-shaped domains accommodating strain are not present in the Natural System models with or without a strength difference along strike. This raises the question of how the complex fault system forms in both the analog models and our numerical versions of them, and hypotheses other than a strength gradient could be tested in the future.
  • Hällfors, Jaakko (2023)
    Topological defects are among the more common phenomena of many extensions of the standard model of particle physics. In some sense, defects are a consequence of an unresolvable misalignment between different regions of the system, much like cracks in ice or kinks in an antiquated telephone cord. In our context, they present themselves as localised inhomogeneities of the fundamental fields, emerging at the boundaries of the misaligned regions at the cost of potentially massive trapped energy. Should the cosmological variety exist in nature, they are hypothesised to emerge from some currently unknown cosmological phase transition, leaving their characteristic mark on the evolution of the nascent universe. To date, so-called cosmic strings are perhaps the most promising type of cosmic defect, at least with respect to their observational prospects. Cosmic strings, as the name suggests, are line-like topological defects; exceedingly thin, yet highly energetic. Given the advent of gravitational wave astronomy, a substantial amount of research is devoted to detailed and expensive real-time computer simulations of various cosmic string models in hopes of extracting their effects on the gravitational wave background. In this thesis we discuss the Abelian-Higgs model, a toy model of a gauge theory of a complex scalar field and a real vector field. Through a choice of a symmetry-breaking scalar potential, this model permits line defects, so-called local strings. We discuss some generalities of classical field theory as well as the interesting mathematical theory of topological defects. We apply these to our model and present the numerical methods needed to write our own cosmic string simulation. We use the newly written simulation to reproduce a number of contemporary results on the scaling properties of string networks and present some preliminary results from a less investigated region of the model parameter space, attempting to compare the effects of different types of string-string interactions. Furthermore, preliminary results are presented on the thermodynamic evolution of the system, and the effects of a common computational trick, comoving string width, are discussed with respect to the evolution of the equation of state.
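    For reference, the model discussed here is the standard Abelian-Higgs theory of a complex scalar $\phi$ and a U(1) gauge field $A_\mu$ with a symmetry-breaking potential (conventions vary; this is one common choice):
    \[
      \mathcal{L} = -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu} + |D_\mu \phi|^2 - \tfrac{\lambda}{4}\left(|\phi|^2 - v^2\right)^2,
      \qquad D_\mu = \partial_\mu - i e A_\mu,
    \]
    whose vacuum manifold $|\phi| = v$ is a circle; configurations in which the phase of $\phi$ winds around this circle cannot be unwound, and the resulting line defects are the local strings simulated in the thesis.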
  • Kodikara, Naveen Timothy (University of Helsinki, 2011)
    We present the results of a numerical simulation of energetic proton propagation in a model of the Hermean magnetic field. The analysis is connected to the scientific work of the particle detector of the Solar Intensity X-ray and particle Spectrometer (SIXS) on board BepiColombo, which is to be launched in 2014. In this work we have used a test particle simulation model developed by Vainio and Sandroos of the University of Helsinki. Mercury's environment is a complex system resulting from the interaction between the solar wind, magnetosphere, exosphere and surface. The BepiColombo mission, a joint project of the European Space Agency (ESA) and the Japan Aerospace Exploration Agency (JAXA), will be equipped with scientific instruments for detailed observation of Mercury's magnetic field and magnetosphere. SIXS will investigate the direct solar X-rays and the energetic proton and electron fluxes in the planet's environment. The flux distribution of energetic protons on the surface of Mercury and at the magnetopause is studied at different locations along the simulated BepiColombo/MPO-like orbit around the planet. Two primary simulations were carried out: first, an orbit with periherm 2840 km and apoherm 3940 km, and second, a surface-skimming circular orbit. The motivation for the circular orbit was to understand the distribution at the magnetopause of particles hitting the surface of the planet, as viewed from above the surface on the noon-midnight meridian. The response of particles at several energy levels to different locations in the static magnetic field is studied by analysing latitude-longitude maps of the flux distribution and asymptotic directions through various widths of the instrument view cone. The SIXS experiment uses five such view cones; the view cone pointing approximately in the anti-nadir direction was targeted in the simulation. Data were compiled at locations with a 30 degree separation (with respect to the latitudinal plane) along the orbits. Interpretation of the data was realised through a few MATLAB® subroutines and functions. The results suggest that SIXS is capable of providing data from targeted areas to perform relevant and specific tasks for MIXS (Mercury Imaging X-ray Spectrometer) as expected and planned. It was found that in order to accurately describe certain propagation behaviours we need to improve the simulation to have the ability to map the path of a specific particle. Concise summaries of the planet Mercury, the BepiColombo mission and solar energetic particles are provided, along with supporting appendices. The basic principles behind the important numerical techniques used are also introduced.
  • Ogbeide, Ilona (2015)
    Multiple changes have occurred in the landscape of cities and brought new challenges to land use in urban areas. Urban areas have become more compact and populations have grown, while the amount of public space has decreased. Many public spaces have turned into quasi-public or even private spaces. Urban planners and decision-makers must take into account the needs of even more different actors than before. As the number of public spaces has decreased, some of the groups using them have become more easily excluded. A battle over the right to be in and use public spaces is fought between different groups. Adolescents are one of the groups that often tend to be excluded, even though public spaces are significant for their free time. They also lack places to be or to meet each other. In addition, knowledge about the meanings and qualities of the places where adolescents spend time is scarce. This master's thesis is a case study of the Vuosaari suburb in eastern Helsinki. The aim is to explore the public and quasi-public spaces where the local adolescents spend their time. Furthermore, the qualities and meanings of those spaces, as well as adolescents' needs in them, are studied. The data used in this study were collected through interviews and place mapping. In place mapping, adolescents could mark the places where they hang out on a map and describe them in written form. The study is based on the idea of the subjective construction of space and place perception, which are also affected by cultural and environmental factors. Adolescents' perception of public and quasi-public spaces is explored through the theory of affordances. An affordance refers to the possible threats and possibilities that one might find in the surrounding environment. The public and quasi-public spaces used by adolescents in Vuosaari are further classified into loose spaces, spaces of doing and tight spaces. Loose spaces are free from adult control, whereas in tight spaces and spaces of doing adolescents are under adult surveillance. Tight spaces are intended for a certain kind of activity, and it is not possible to deviate from the activities designated beforehand. In spaces of doing it is possible to perform different activities more freely; however, challenging the norms of those spaces leads to sanctions. The study found that adolescents use different kinds of public and quasi-public spaces. How they use and value them depends on their needs and preferences. The findings suggest that socializing and activities play a major role for adolescents in public and quasi-public spaces. Accessibility and closeness to home are also important factors for adolescents when choosing hangout places. Ambiguity characterizes the spaces adolescents prefer: they are sometimes used to expose oneself in front of others, but on the other hand adolescents seek places where they can avoid adult control. Therefore, especially loose spaces, which offer the possibility to avoid adult control, proved to be important for adolescents. Additionally, social and functional affordances were valued, as well as spaces where those affordances could be found. Adolescents should not, however, be bundled into one category, since they have different needs in public and quasi-public spaces. Their needs depend on factors such as gender and personal preferences. Hence, urban planners and decision-makers ought to offer as diverse public and quasi-public spaces as possible. Furthermore, adolescents' use of public and quasi-public spaces should be accepted.
  • Niemi, Liisa (2023)
    In my master's thesis, I examine the experience of classroom climate among 8th-grade pupils in comprehensive school. Improving the classroom climate can be seen as one of the most significant challenges for schools, and previous research indicates a need for such development in Finland. The aim of the study is to find out what kinds of experiences pupils have of their class climate, which factors are central in improving or weakening the climate, and what kinds of factors could support the formation of an open climate. My intention is thus, by listening to young people's own experiences and thoughts, to increase understanding of the significance of classroom climate for their well-being. The theoretical framework of the study is based on two perspectives: social-cognitive theory of the internal and external factors affecting the climate, and geographical theory in which the physical characteristics of the learning environment are seen as influencing the factors of the social-cognitive theory of climate. The research strategy is participatory, and the research method combines qualitative and quantitative methods. The qualitative methods comprise a two-phase classroom observation component and semi-structured thematic interviews; the quantitative method consists of questionnaires distributed to the pupils. The observation data, interviews and questionnaires were analysed with qualitative data-driven content analysis, and descriptive statistics were additionally used in analysing the questionnaire data. The results showed that the classroom space acted as a factor supporting the climate, and the average experience of the class climate was good. However, the results also showed that the climate is not necessarily what it appears to be from the outside, since the experienced climate varied significantly depending on the presence of individual pupils. The single most significant factor shaping the climate turned out to be the effect of individual pupils' problem behaviour. Based on the results, a good class climate was defined as a combination of accepting and friendly relationships built on social interaction and team spirit, shared activities, and a peaceful working atmosphere and order that support a disposition for learning. The most important climate-supporting factors were means related to these aspects, such as supporting the social structure of the class through shared activities and improved relationships between pupils, ensuring a peaceful working atmosphere and a disposition for learning through stricter discipline and rearranging the classroom, and supporting equality at the level of both the teacher and the pupils. Overall, the study reinforced the view that many different factors affect the climate. In a shared space it is formed in interaction with others, but it is also each person's individual experience. It depends on internal and external factors of both the pupils and the class, and understanding it requires attending to more than just certain individual factors. Achieving a good climate is not just a matter of influencing the classroom space, the social structure or the teacher-pupil relationship; a good climate must also take into account pupils' individual characteristics, needs and rights. This calls for holistic solutions, whose development and implementation require understanding and cooperation from pupils and parents as well as from schools and political decision-makers.
  • Stenman, Annaleena (2017)
    As a consequence of industrialization, metal concentrations in nature are high enough to have harmful effects on humans, animals and the environment. On those grounds, the EU legislates maximum metal concentrations in several matrices. More sensitive and more selective analytical techniques are required to make sure these norms are being followed. To assess biological activity and toxicity, knowledge of the metal's oxidation state and speciation is needed along with the total metal concentration. The literature part of the thesis covers the most used sample pretreatment and analysis techniques in metal determination in the years 2007–2017. Atomic absorption spectrometry and inductively coupled plasma based techniques are compared for determining total metal concentrations. Liquid chromatography and capillary electrophoresis coupled with mass spectrometric detection are presented as techniques used in speciation analysis. Sample pretreatment techniques are also presented, especially solid-phase extraction, as well as parameters influencing extraction efficiency and selectivity. New sorbent materials, their synthesis and characterization, and applications are discussed in the last section of the literature part. A few studies on metal speciation analysis are also presented, and some future aspects of solid-phase extraction are evaluated at the end of the literature part. In the experimental part of the thesis, metals were determined by inductively coupled plasma mass spectrometry (ICP-MS) from boric acid containing waters at the Loviisa Nuclear Power Plant. Determination of metals relates directly to the safe operation of the plant. However, the large concentration of boric acid in the process water samples prevents their direct analysis with ICP-MS. In the experimental part, the boric acid concentration was reduced with two sample pretreatment methods: esterification and on-line aerosol dilution. Aerosol dilution was a better approach to process water analysis than esterification. Thus, three analytical methods using aerosol dilution at different boric acid and metal concentrations were optimized. Validation of two of these methods is presented at the end of the experimental part.