
Browsing by department "Institutionen för fysik"


  • Ignatius, Karoliina (2013)
    Ice crystals in the Earth's atmosphere have a considerable impact on cloud optical and radiative properties such as reflectivity, as well as on cloud dynamics, chemical processes, the initiation of precipitation and cloud lifetime. Thus, they directly affect the albedo of the Earth, which in turn is a significant factor in climate change. The first step of ice formation, the phase transition from liquid to solid, is called ice nucleation. Ice has been identified to form in the atmosphere both homogeneously - without the presence of a foreign substance - and heterogeneously, i.e. induced by pre-existing surfaces. Heterogeneous ice nucleation is the primary pathway of ice formation in the atmosphere. Homogeneous ice nucleation also occurs in the upper troposphere, but it requires very low temperatures, whereas in heterogeneous freezing the free energy needed for crystallization is lower and freezing can happen at higher temperatures, normally above −37 °C. In heterogeneous ice nucleation, there are four freezing mechanisms, called modes, that describe the onset conditions for freezing: the deposition, immersion, condensation and contact modes. The seed particles on which ice forms in heterogeneous ice nucleation are called ice nuclei (IN). Typical IN particles include mineral dust, soot, metal, bacteria and other bioaerosols (pollen, fungal spores), humic-like substances, solid ammonium sulphate, organic acids and volcanic ash. This thesis is a literature survey of the current theoretical knowledge on heterogeneous ice nucleation. It has long been known that classical nucleation theory (CNT), if employed with a single constant contact angle, does not reproduce the experimental results. This is due to the fundamental assumptions of CNT: the spherical form of the initial ice embryo having the properties of the bulk ice crystal; uniform surfaces; equal nucleation probability for each particle; and stochastic freezing behaviour.
To solve this challenge, several theoretical and empirical extensions of CNT have been derived: the use of a distribution of contact angles and active sites; integrating individual nucleation rates over measured size spectra of the IN; and using the Ice Nuclei Active Surface Site (INAS) density as a metric for normalizing different experiments. As a result of this literature survey, three main lines of research in this field can be distinguished: (1) ice nuclei characterisation; (2) theoretical and empirical modelling of the heterogeneous ice nucleation scheme; (3) parameterising heterogeneous ice nucleation for cloud and climate models. These lines are not altogether separate, but intertwined: knowledge of e.g. the surface properties of the IN is essential for deriving theoretical formulations. Parameterisations, on the other hand, are very much needed in order to obtain knowledge about the indirect climatic effect of ice in the atmosphere.
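The single-contact-angle picture the survey criticizes can be sketched numerically. The geometric factor f(θ) below is the standard flat-surface form used in CNT to reduce the homogeneous nucleation barrier; the barrier value passed in is purely illustrative, not taken from the thesis:

```python
import math

def contact_angle_factor(theta_deg):
    """Geometric factor f(theta) = (2 + cos)(1 - cos)^2 / 4 in CNT:
    the reduction of the homogeneous nucleation barrier by a flat
    foreign surface with contact angle theta."""
    c = math.cos(math.radians(theta_deg))
    return (2.0 + c) * (1.0 - c) ** 2 / 4.0

def het_barrier(delta_g_hom, theta_deg):
    """Heterogeneous barrier: dG*_het = f(theta) * dG*_hom."""
    return contact_angle_factor(theta_deg) * delta_g_hom

# A perfectly wettable surface (theta -> 0) removes the barrier entirely,
# while theta = 180 deg recovers the homogeneous limit.
print(contact_angle_factor(0.0))             # 0.0
print(round(contact_angle_factor(90.0), 3))  # 0.5
print(round(contact_angle_factor(180.0), 3)) # 1.0
```

A distribution of contact angles, one of the extensions the survey discusses, would simply average nucleation rates computed from f(θ) over that distribution.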
  • Hakala, Jani (2012)
    The most important parameters describing an aerosol particle population are the size, concentration and composition of the aerosol particles. The size and water content of the aerosol particles depend on the relative humidity of the ambient air. Hygroscopicity is a measure of the water absorption ability of an aerosol particle, and volatility defines how aerosol particles behave as a function of temperature. A Volatility-Hygroscopicity Tandem Differential Mobility Analyzer (VH-TDMA) is an instrument for size-selected investigation of particle number concentration, volatility, hygroscopicity and the hygroscopicity of the particle core, i.e. what is left of the particle after volatilization. Knowing these qualities of aerosol particles, one can predict their behavior in different atmospheric conditions. Volatility and hygroscopicity can also be used for indirect analysis of chemical composition. The aim of this study was to build and characterize a VH-TDMA, and to report the results of its field deployment at the California Nexus (CalNex) 2010 measurement campaign. The calibration measurements validated that with the VH-TDMA one can obtain accurate volatility and hygroscopicity measurements for particles between 20 nm and 145 nm. The CalNex 2010 results showed that the instrument is capable of field measurements under varying measurement conditions, and valuable data about the hygroscopicity, volatility and mixing state of several types of aerosols were measured. The data obtained were in line with observations based on data measured with other instruments.
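The quantities a TDMA targets are simple ratios. As a hedged illustration, the sketch below computes a hygroscopic growth factor and converts it to a single hygroscopicity parameter κ using the widely used κ-Köhler approximation with the Kelvin effect neglected; the particle sizes are invented examples within the instrument's 20-145 nm range:

```python
def growth_factor(d_wet_nm, d_dry_nm):
    """Hygroscopic growth factor GF = D_wet / D_dry at a given RH."""
    return d_wet_nm / d_dry_nm

def kappa_from_gf(gf, rh_percent):
    """Single-parameter kappa (Petters & Kreidenweis 2007 framework),
    neglecting the Kelvin effect: GF^3 = 1 + kappa * aw / (1 - aw),
    with water activity aw approximated by RH."""
    aw = rh_percent / 100.0
    return (gf ** 3 - 1.0) * (1.0 - aw) / aw

# Invented example: a 100 nm dry particle growing to 145 nm at 90 % RH
gf = growth_factor(145.0, 100.0)
print(round(gf, 3))                       # 1.45
print(round(kappa_from_gf(gf, 90.0), 3))  # ~0.23, a moderately hygroscopic particle
```

The core hygroscopicity after volatilization would be obtained the same way, just with the post-heating dry diameter.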
  • Tuppi, Lauri (2017)
    Nowadays even medium-range (~6 days) forecasts are mostly reliable, but occasionally the quality of the forecasts collapses suddenly. During such a collapse, or bust, the actual forecast is worse than a 'forecast' made by using climatological mean values. In this study, a sudden collapse of predictability is investigated using one example case from April 2011. The OpenIFS NWP model and the ERA-Interim reanalysis were used as primary tools. Thirteen deterministic forecasts with the best available initial conditions were run, but the focus is on the forecast initialized on the worst day. One five-member ensemble forecast, also initialized on the worst day, is investigated as well. The output of OpenIFS was compared to ERA-Interim. Previous studies have shown that the reasons for European forecast busts can be found in North America. Therefore, the aim of this study is to determine whether the incorrect representation of convection over North America led to a forecast bust over Europe. Besides this main goal, the study discusses how the errors originating from North American convection lead to a forecast bust in Europe 6 days later, and it also looks for the cause of the forecast bust in the initial conditions of the ensemble forecast. In this case, the sudden collapse of predictability in Europe is caused by NWP models predicting the change of weather regime incorrectly. OpenIFS, too, predicts the formation of a blocking high over Northern Europe, although there are no signs of blocking in the reanalysis. In North America, where the error originates, the forecast of the evolution of a cluster of thunderstorms fails, and with it the convective forcing of the large-scale dynamics. The error grows and is transported to Europe by Rossby waves. Although none of the ensemble members forecast the weather in Europe properly, the outcomes deviated enough from each other that a comparison of the initial conditions was meaningful.
The most important finding was that a deeper trough over the Rocky Mountains improves the forecast in Europe. This study was able to show evidence that misrepresented convection over North America caused the forecast to fail in Europe. Moreover, it was able to clarify how the errors caused by misrepresented convection evolved and led to the forecast bust in Europe. The error at the beginning of the forecast in North America grows so fast that it is unlikely to be due to model parameterizations; the initial conditions must contain errors. Such failed forecasts are difficult to avoid completely, but the easiest way to reduce them is to improve the quality of the observations in the Rocky Mountains.
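A forecast "bust" in the sense used above, a forecast worse than climatology, can be expressed as a negative skill score. The sketch below uses a simple RMSE-based score with invented numbers, not data from the study:

```python
import math

def rmse(forecast, truth):
    """Root-mean-square error between two equally long sequences."""
    return math.sqrt(sum((f - t) ** 2 for f, t in zip(forecast, truth)) / len(truth))

def skill_score(forecast, climatology, truth):
    """Skill relative to a climatological 'forecast':
    1 = perfect, 0 = no better than climatology, < 0 = a bust."""
    return 1.0 - rmse(forecast, truth) / rmse(climatology, truth)

truth = [2.0, 4.0, 6.0, 8.0]           # verifying analysis (invented)
clim  = [5.0, 5.0, 5.0, 5.0]           # climatological mean values
good  = [2.5, 4.5, 5.5, 7.5]           # a skilful forecast
bust  = [8.0, 1.0, 10.0, 2.0]          # a busted forecast

print(skill_score(good, clim, truth) > 0)  # True
print(skill_score(bust, clim, truth) < 0)  # True
```

Operational verification typically uses anomaly correlation rather than raw RMSE, but the "worse than climatology" criterion is the same idea.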
  • Kokkonen, Iiro (2018)
    The Kara Sea is part of the seasonal sea ice zone in the Arctic, where the warming climate is rapidly changing the sea ice regime. The warm Atlantic water transported through the Barents Sea has a strong influence on the ice conditions in the northern Kara Sea. In this thesis, trends and interannual variability in sea ice conditions in the Kara Sea are studied. For this purpose, the coupled sea ice-ocean model NEMO-LIM3 and sea ice concentration datasets derived from passive microwave satellite observations (SMMR, SSM/I and SSMIS) are used. Additionally, the model performance is assessed by comparing its output with the observations. The ice coverage, examined at regional and local scales, shows negative trends in all months in 1978-2015. The interannual variability of the total ice-covered fraction increased in winter and spring when the ice regime shifted from full to partial ice cover over the sea; meanwhile, the variability in summer and autumn decreased. The annual ice-free time extended rapidly in the area north of Novaya Zemlya, where the warm Atlantic water enters the Kara Sea. The mean sea ice thickness, based on the sea ice-ocean model data in 1997-2015, has decreased in all months. The model is generally in good agreement with the observations, with the exception of the northern Kara Sea, where the model underestimated heat advection. The findings confirm that the sea ice conditions in the Kara Sea have changed towards a new regime with shorter and more variable ice seasons.
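The monthly trends mentioned above come down to least-squares fits of ice-cover time series. A minimal sketch with synthetic data (the trend magnitude and noise are invented, not the thesis's values):

```python
import numpy as np

def ice_trend_per_decade(years, concentration):
    """Least-squares linear trend of ice-covered fraction per decade."""
    slope, _intercept = np.polyfit(years, concentration, 1)
    return slope * 10.0

# Synthetic ice fraction declining over 1978-2015 with small variability
years = np.arange(1978, 2016)
conc = 0.8 - 0.005 * (years - 1978) + 0.01 * np.sin(years)

trend = ice_trend_per_decade(years, conc)
print(trend < 0)  # True: a negative trend, as found for all months
```

Doing this separately for each calendar month, and for regional subsets of grid cells, reproduces the kind of trend analysis the abstract describes.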
  • Patomäki, Sofia (2017)
    In a quantum computer, the information carriers, which are bits in ordinary computers, are implemented as devices that exhibit coherent superpositions of physical states and entanglement. Such components, known as quantum bits or qubits, can be realized with various different types of two-state quantum systems. Quantum computers will be built for computational speed, with hoped-for applications especially in cryptography and in other tasks where classical computers remain inefficient. Circuit quantum electrodynamics (cQED) is a quantum-computer architecture which employs superconducting electronic components and microwave photon fields as building blocks. Compared to cavity quantum electrodynamics (CQED), where atoms are trapped in physical cavities, cQED is more attractive in that its qubits are tunable and conveniently integrable with the electronics already in use. This architecture has produced some of the most promising qubit designs, even though their coherence times, reaching tens of microseconds, are still below the state of the art of spin qubits, which reach milliseconds. Coherence times are historically the most relevant parameters describing the fitness of a qubit, although these days they are not necessarily the limiting factor. This thesis presents a comprehensive set of theoretical and experimental methods for measuring the characteristic parameters of superconducting qubits. We especially study transmission-line-shunted plasma oscillation qubits, or transmons, and present experimental results for a single sample. A transmon capacitively couples a superconducting quantum interference device (SQUID) with a coplanar waveguide (CPW) resonator, often with added frequency tunability utilizing an external magnet. The number of superconducting charge carriers tunnelled through a junction in the SQUID is used as the qubit degree of freedom. Readout of the qubit state is carried out by measuring transmission through the CPW.
A cryogenic setup is employed, with measurement and driving pulses delivered from microwave sources. Steady-state spectroscopy is employed to determine the resonance frequencies of the qubit and the resonator, the qubit-resonator coupling constants, and the energy parameters of the qubit. Pulse-modulated measurements are employed to determine the coherence times of the qubit. The related analysis and simulation programs and scripts are collected at github.com/patomaki.
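A typical coherence-time analysis of such pulse-modulated measurements reduces to fitting an exponential decay. The sketch below recovers an energy-relaxation time T1 from a synthetic, noiseless curve with an assumed known offset, which is a simplification of real qubit data analysis:

```python
import numpy as np

# Synthetic T1 measurement: excited-state signal vs. delay (microseconds)
t = np.linspace(0.0, 100.0, 50)
true_t1 = 20.0
offset = 0.05                       # assumed-known readout baseline
signal = np.exp(-t / true_t1) + offset

# With the offset known, T1 follows from a log-linear least-squares fit:
# log(signal - offset) = -t / T1
slope, _ = np.polyfit(t, np.log(signal - offset), 1)
t1_fit = -1.0 / slope
print(round(t1_fit, 1))  # 20.0 (microseconds)
```

Real data with noise and an unknown baseline would call for a nonlinear fit of a*exp(-t/T1)+c instead; Ramsey (T2*) analysis adds an oscillating factor to the same decay envelope.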
  • Koivunen, Niko (2015)
    Flavour violating processes have never been observed for the charged leptons: the electron, muon and tau. The existence of charged lepton flavour violating (CLFV) processes is nevertheless expected, since flavour is violated by all the other fermions of the standard model (SM). In the standard model the neutrinos are massless, which forbids the mixing of neutrino flavours and also the violation of lepton flavour. The zero mass of the neutrinos in the SM is in conflict with the experimentally observed neutrino oscillations, so the standard model has to be extended to include massive neutrinos. The easiest way to explain the neutrino masses is to assume that neutrinos acquire them in the same way as the rest of the SM fermions: through the Higgs mechanism. This route, however, leads to problems with the naturalness of the neutrino Yukawa coupling. One of the most popular methods of generating the neutrino mass is the so-called seesaw mechanism (type I). The standard model, extended with neutrino masses, allows the charged lepton flavour to be violated, but this leads to unobservably small transition rates. Therefore an observation of a charged lepton flavour violating process would be clear evidence of the existence of new physics beyond the standard model and its trivial extensions. To have hope of ever observing charged lepton flavour violating processes, there must be an extension of the standard model which produces observable, though small, rates for CLFV processes. One of the most popular extensions of the standard model is the so-called minimal supersymmetric standard model (MSSM). The neutrinos are massless in the MSSM, as they are in the SM, and therefore CLFV processes are forbidden in the MSSM as well. Luckily the neutrino masses can be generated via the seesaw mechanism in the MSSM just as in the SM. The MSSM contains more potential sources of CLFV processes than the SM; the extra sources are the soft mass parameters of the sleptons.
In supersymmetric models the sleptons couple to the leptons through the slepton-lepton-gaugino vertices, which generate the CLFV processes at the loop level. Often the off-diagonal soft terms in the MSSM are assumed to vanish at the input scale, where supersymmetry breaks, while experiments are done at the much lower electroweak scale. The soft SUSY-breaking terms acquire large radiative corrections as they are run from the input scale down to the electroweak scale. Here the seesaw mechanism kicks in: it brings with it the off-diagonal neutrino Yukawa coupling matrices, which allows the off-diagonal slepton mass terms to run to non-zero values at the electroweak scale. In this thesis, charged lepton flavour violation is discussed first in the context of the standard model. Then the CLFV processes l_i → l_j γ, l_i → l_j l_k l_l and l_i ↔ l_j are studied in the most general way: in effective theories. Finally, charged lepton flavour violation is studied in supersymmetric theories in general and more specifically in the minimal supersymmetric standard model extended with the seesaw mechanism (type I).
  • Kylliäinen, Joonas (2017)
    As data traffic, as well as speed demands, increases, mobile networks require means to fulfil these demands economically. The solution comes from the cloud. In order to move the processing to the cloud, the cloud must be carefully dimensioned so that it is known how many resources each situation requires. This means there must be a way to calculate, from the traffic, the number of virtual machines required and the hardware resources those virtual machines need, when the cloud infrastructure used is OpenStack. This thesis provides two methods for calculating the virtual machines from the traffic profile: the first is based on performance testing of the virtual network functions, and the second on a machine learning technique called multiple linear regression analysis. Furthermore, approximation algorithms are used in this work to solve multidimensional variants of classical optimization problems such as the bin packing problem and the subset sum problem. These algorithms are used to dimension the required resources from the virtual machines to the hardware and vice versa. The algorithms are bundled into a program with a graphical user interface to make them as user-friendly as possible.
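First-fit decreasing is a classical approximation algorithm for the one-dimensional bin packing problem of the kind the thesis generalizes to multiple resource dimensions. A minimal sketch, with invented vCPU demands and host capacity (not the thesis's dimensioning data):

```python
def first_fit_decreasing(items, capacity):
    """First-fit decreasing bin packing: sort demands in decreasing order
    and place each into the first bin with enough remaining capacity,
    opening a new bin when none fits. Here: VM demands onto hosts."""
    free = []      # remaining capacity of each open bin
    packing = []   # items placed in each bin
    for size in sorted(items, reverse=True):
        for i, f in enumerate(free):
            if size <= f:
                free[i] -= size
                packing[i].append(size)
                break
        else:
            free.append(capacity - size)
            packing.append([size])
    return packing

# Invented example: VM vCPU demands packed onto 8-vCPU hosts
hosts = first_fit_decreasing([4, 4, 3, 3, 2], capacity=8)
print(hosts)       # two hosts suffice for these demands
print(len(hosts))  # 2
```

A multidimensional variant would track a vector of remaining capacities (vCPU, RAM, disk) per bin and require the fit test to hold in every dimension.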
  • Pöyry, Outi Irene (2015)
    In the upgraded CMS pixel detector (phase II upgrade), the pixel size will become smaller due to the higher occupancy caused by the higher luminosity of the LHC. This means that the bump bonds between the sensor and the read-out circuit (ROC) will also become smaller, which results in a smaller gap between the sensor and the ROC. This will increase the probability of electrical sparking that might destroy the ROC, the sensor or both. Jaakko Härkönen has suggested using alumina passivation on the modules to prevent sparking. In this thesis it was studied whether wire bonding is applicable on a surface with an alumina passivation. It was also of interest which parameters of the bonder make stronger bonds. Bonding was tested on metal pads with different layer thicknesses of alumina: 0 nm, 10 nm, 15 nm, 20 nm and 25 nm. The strengths of the bonds were tested using the bond pull test. The results indicate that wire bonding on alumina does well in pull-strength tests, though the bonds are slightly weaker than on surfaces with no alumina. Increasing the bonding force seems to weaken the bonds; increasing the bonding power, on the other hand, seems to make stronger bonds. The conclusion of this thesis is that alumina is a viable choice for passivation, since it does not seem to have a negative effect on the module wire bonding.
  • Pönni, Arttu (2015)
    The AdS/CFT correspondence is the first realization of the holographic principle. The holographic principle makes the bold statement that in a theory of quantum gravity all information in a region of spacetime can be completely described by information on its boundary. This would make the universe in a certain sense a hologram, as our spacetime and everything in it could be described by some fundamental degrees of freedom living on the boundary of spacetime. Gauge/gravity dualities realize the holographic principle by stating that string theory in ten-dimensional spacetime and certain gauge field theories living on its boundary can be equivalent descriptions of the same physics. The AdS/CFT correspondence was the first of these dualities to be discovered. The correspondence equates type IIB string theory on AdS_5 × S^5 with \mathcal{N}=4 super Yang-Mills theory living on four-dimensional Minkowski space, which is the boundary of five-dimensional anti-de Sitter space, AdS_5. In this thesis, we first briefly review the necessary theoretical components, which are combined in the correspondence. Then, the AdS/CFT correspondence is motivated by considering the low-energy limit of string theory in a spacetime with a stack of coincident D3-branes. The third part of this work is dedicated to the study of the properties and dynamics of the anti-de Sitter bulk. A black hole solution with an asymptotically AdS background is discussed along with its thermodynamics, and the connection with the dual field theory is emphasized. Then we present a model for a collapsing shell in AdS space and solve its dynamics. The black hole state, which the collapsing shell approaches, corresponds to thermal equilibrium in the dual field theory. Lastly, we consider our shell model in the context of the AdS/CFT correspondence and present a method for computing two-point correlation functions on the field theory side.
This method is then used to compute retarded correlators in a two-dimensional CFT at finite temperature. We are able to reproduce previous results obtained using different computational methods, following a seminal work of Son and Starinets.
  • Willamo, Teemu (2017)
    This Master’s Thesis deals with stellar magnetic activity, both as it is seen in the Sun, our closest star, and in other, more extreme examples of stellar activity. For the Sun, data of much higher quality are available, and for a much longer time, so it remains by far the best studied star, although stars that are much more magnetically active have been discovered. This thesis reviews the most common forms of magnetic activity, as observed both in the Sun and in other stars. A special focus is given to BY Draconis-type stars. These are young stars whose photometric brightness variations are caused by large, cool starspots, similar to the sunspots seen on the Sun. A particular BY Draconis star, V889 Herculis, is analysed in detail, using spectroscopic and photometric methods. From spectroscopy, by the Doppler imaging method, a surface temperature map can be constructed for the star. From photometry, both short-term and long-term variations of the star can be studied. During the 20-year time span of the photometric data, the star seems to have gone through activity cycles similar to the well-known 11-year solar cycle. The aim of this thesis is to compare solar and stellar activity in general, and specifically V889 Her to the Sun. Based on the analysis, the basic properties of V889 Her seen in previous studies are confirmed: a large polar spot is present on its surface, and there are clear changes in its activity. As the stellar parameters of V889 Her, like mass and surface temperature, are very similar to the Sun's, and it is also a single star, the main difference is age. As a young star, V889 Her is still magnetically much more active than the Sun is today, but it will probably continue to become more and more similar to the present Sun, losing its high activity with increasing age. Similarly, V889 Her shows us what the Sun most likely was like billions of years ago.
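Extracting a rotation or activity period from photometry like V889 Her's is a periodogram problem. The sketch below uses a plain FFT periodogram on evenly sampled synthetic data with an invented 6-day spot-modulation period; real, gapped photometry would call for e.g. a Lomb-Scargle periodogram instead:

```python
import numpy as np

# Synthetic photometry: a spotted star with a 6-day rotation period,
# sampled twice a day for 240 days (evenly sampled on purpose)
t = np.arange(0.0, 240.0, 0.5)                # days
period = 6.0
flux = 1.0 + 0.02 * np.sin(2.0 * np.pi * t / period)

# Plain periodogram: power of the mean-subtracted signal vs. frequency
freqs = np.fft.rfftfreq(t.size, d=0.5)        # cycles per day
power = np.abs(np.fft.rfft(flux - flux.mean())) ** 2
best_period = 1.0 / freqs[np.argmax(power)]
print(round(best_period, 1))  # 6.0 days
```

Long-term activity cycles would show up the same way at much lower frequencies, which is why decades-long photometric records are needed to see a solar-like 11-year cycle.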
  • Erkkilä, Kukka-Maaria (2016)
    Freshwaters are a source of carbon to the atmosphere in the form of methane (CH4) and carbon dioxide (CO2). Global estimates of the freshwater contribution to the carbon budget are often based on a water boundary layer model (BLM) with the gas transfer coefficient k calculated solely from wind speed. According to comparison studies, this model gives underestimated emissions and should not be used when more reliable results are required. A widely used flux measurement method over lakes is the floating chamber (FC) method. An FC measures the surface flux from a very small area of the lake, so it may not be representative of the whole ecosystem; the measurements are relatively cheap and easy, but also laborious and sporadic. Instead of measuring just a specific point on the lake, the eddy covariance (EC) technique provides continuous flux measurements over a much larger source area (footprint). EC systems have been widely used over land areas, but are now gaining popularity in the lake community as well. The aim of this study was to compare the EC, FC and BLM methods for CO2 and CH4 fluxes over a boreal lake. The measurements were made at the small dimictic Lake Kuivajärvi in Hyytiälä (Juupajoki, Southern Finland) during an intensive field campaign in September 2014. Manual FC measurements were done at four measurement spots in the EC footprint area 2-3 times a day to capture spatial and temporal variability. The gas transfer velocity for the BLM was calculated according to three different parametrizations. The results indicate that BLM fluxes calculated from water convection and wind-driven turbulent gas exchange compare quite well with the EC measurements, while the model based solely on wind speed is a clear underestimate. FC measurements show about 1.7 times larger flux values than EC. The difference is clearer for CH4 than for CO2 fluxes. The greatest CH4 fluxes were measured near the shore, while the CO2 flux did not show any spatial variability.
After the lake started its autumn mixing, the CH4 flux showed diurnal variation, with the highest values measured during daytime; there was no diurnal variation before mixing. The CO2 flux, on the other hand, showed diurnal variation only when calculated according to the BLM method.
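The boundary layer model reduces to a gas transfer velocity multiplied by an air-water concentration difference. The sketch below uses the purely wind-driven k600 parametrization of Cole and Caraco (1998) as one example of the wind-speed-only approach the study finds to underestimate fluxes; all input concentrations and the wind speed are invented, and the unit handling is a simplified illustration:

```python
def k600_cole_caraco(u10_m_per_s):
    """Gas transfer velocity k600 in cm/h from 10-m wind speed (m/s),
    the purely wind-driven Cole & Caraco (1998) parametrization:
    k600 = 2.07 + 0.215 * U10^1.7."""
    return 2.07 + 0.215 * u10_m_per_s ** 1.7

def blm_flux(c_water, c_equilibrium, k_cm_per_h):
    """BLM flux (mmol m^-2 d^-1) from the difference between the surface
    water concentration and the air-equilibrium concentration (mmol m^-3)."""
    k_m_per_day = k_cm_per_h * 0.01 * 24.0
    return k_m_per_day * (c_water - c_equilibrium)

# Invented example: a supersaturated lake (CO2 ~3x equilibrium), light wind
flux = blm_flux(c_water=60.0, c_equilibrium=20.0,
                k_cm_per_h=k600_cole_caraco(3.0))
print(flux > 0)  # True: the lake is a source of CO2 to the atmosphere
```

Parametrizations that add waterside convective mixing modify k, not the flux equation itself, which is why the comparison in the study comes down to the choice of k.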
  • Koskelo, Jaakko (2012)
    Ionic liquids are salts with a low melting point (below roughly 100 °C). They have several useful properties and numerous potential applications. More detailed knowledge of the atomic-level structure of ionic liquids is, however, important for understanding their properties and potential, and for developing applications. In this work, 1,3-dimethylimidazolium chloride ([mmim]Cl), a prototypical ionic liquid of low molecular mass, was studied. Inelastic X-ray scattering was employed to obtain new information. In inelastic X-ray scattering, a photon scatters from the electron system, transferring both energy and momentum. Inelastic scattering of a photon is called Compton scattering when the energy and momentum transfer is large. Compton scattering can be used to study the atomic- and molecular-level structure of matter, since the quantity determined in Compton scattering experiments, the Compton profile, is sensitive to changes in interatomic geometry. The interpretation of the measurements is challenging, however, and computational modelling plays a large role in it. In this thesis, the difference between the isotropic Compton profiles of the liquid and crystal phases of [mmim]Cl (the difference profile) was calculated. Under certain assumptions, the Compton profile depends on the electron momentum density, so the profiles can be determined from electronic-structure calculations describing the ground state of the material. Here, the electronic-structure calculations used Kohn-Sham density functional theory, periodic boundary conditions and Gaussian basis sets for the electronic states. In addition, factors affecting the accuracy of the calculation were assessed. The density of the momentum grid, as well as the choice of exchange-correlation functional and basis set, was found to have a large effect on the calculated difference profile. These factors were clearly more significant than the statistical uncertainty due to the finite number of liquid structures.
To interpret the difference profile, modifications based on the liquid structure were made to a single [mmim]Cl ion pair taken from the crystal structure, and the effect of these modifications on the Compton profile was examined. Both changes in the internal structure of the molecular ions and changes in the inter-ionic geometry were found to affect the calculated difference profile significantly. The results presented in this work aid in the interpretation and explanation of the experimental difference profile.
  • Järvi, Jari (2018)
    Hybrid organic-inorganic perovskites (HPs) are a novel materials class in photovoltaic (PV) power generation. The PV performance of HPs is impressive, although its microscopic origin is not well known due to the complex atomic structure of HPs. Specifically, the disordered mobile organic cations aggravate the use of conventional computational models. I have addressed this structural complexity by developing a multi-scale model that applies quantum mechanical (QM) calculations of small HP supercell models in large coarse-grained structures. With a mixed QM-classical hopping model, I have studied the effects of cation disorder on charge mobility in HPs, which is a key feature to optimize for their PV performance. My multi-scale model parametrizes the interaction between neighboring methylammonium (MA) cations in the prototypical HP material, methylammonium lead triiodide (CH3NH3PbI3, or MAPbI3). For the charge mobility analysis with my hopping model, I solved the QM site-to-site hopping probabilities analytically and computed the nearest-neighbor electronic coupling energies from the band structure of MAPbI3 with density functional theory. I investigated the charge mobility in various MAPbI3 supercell models of ordered and disordered MA cations. My results indicate a structure-dependent mobility, in the range of 50–66 cm2/Vs, with the highest observed in the ordered tetragonal phase. My multi-scale model enables the study of long-range atomistic processes in complex structures at an unprecedented scale with QM accuracy, with potential applications well beyond this study.
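The thesis's QM hopping model is not reproduced here, but the order of magnitude of a hopping mobility can be sketched with the Einstein relation on a 3D lattice; both the hopping rate and the use of the MAPbI3 Pb-Pb spacing as the site spacing are loose illustrative assumptions:

```python
def hopping_mobility(hop_rate_hz, site_spacing_m, temperature_k):
    """Einstein-relation estimate of carrier mobility from a site-to-site
    hopping rate Gamma on a 3D lattice: D = a^2 * Gamma / 6,
    mu = e * D / (k_B * T). Returns mobility in m^2/(V s)."""
    e = 1.602176634e-19   # elementary charge, C
    kb = 1.380649e-23     # Boltzmann constant, J/K
    d = site_spacing_m ** 2 * hop_rate_hz / 6.0
    return e * d / (kb * temperature_k)

# Assumed inputs only: ~0.63 nm site spacing, a 1e15 1/s hopping rate, 300 K
mu = hopping_mobility(1.0e15, 0.63e-9, 300.0)
print(round(mu * 1e4, 1), "cm^2/Vs")  # 25.6 cm^2/Vs with these assumptions
```

With these invented inputs the estimate lands in the tens of cm2/Vs, the same order as the thesis's 50-66 cm2/Vs range, but the agreement is coincidental to the assumed hopping rate, not a validation of it.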
  • Aho, Noora (2017)
    Cytochrome bc1, also known as complex III, is the third enzyme of the electron transfer chain in cellular respiration, which is the main process generating energy in living cells. Complex III operates by oxidizing ubiquinol and transferring two electrons to cytochrome c, while reducing ubiquinone. The electron transfer is coupled to proton translocation across the inner mitochondrial membrane. Thus, complex III contributes to the generation of a proton electrochemical gradient, which is required for the function of ATP synthase. Cardiolipins (CLs), constituting up to 20 mol % of the lipids in the inner mitochondrial membrane, have an important role in the structure and dynamics of the membrane, as well as in maintaining the correct function of the whole electron transfer chain. Cardiolipins are especially vulnerable to oxidation by reactive oxygen species (ROS) due to their dimeric structure with four doubly unsaturated acyl chains. Cytochrome bc1 is one of the main producers of ROS in mitochondria, increasing the exposure of tightly bound CLs to oxidation. Oxidative stress and CL oxidation have been associated with, for instance, programmed cell death, aging, and the development of Alzheimer's and Parkinson's diseases. The objective of this thesis was to build a new computational model of cytochrome bc1 in a membrane, and to study the lipid interactions of complex III using atomistic molecular dynamics simulations. A model system with a high-resolution structure of complex III, embedded in a multicomponent bilayer mimicking the inner mitochondrial membrane, was constructed. Four atomistic simulations of 1 μs each were performed to reveal possible cardiolipin binding sites and to examine the effects of CL oxidation on the complex. Altogether, eight CL binding sites on cytochrome bc1 were found, out of which two have not been suggested previously.
The key residues of each binding site were listed to allow comparison with earlier results and to identify the new binding sites in detail. In order to investigate the effects of CL oxidation, carboxylic acid and hydroperoxyl groups were attached to the acyl chains of three crystallographically resolved CLs. The oxidized region of the CL tails changed the nature of the interactions with the protein and the surrounding water. As a tail was oxidized, the results showed an increase in the number of water molecules surrounding it. Additionally, the oxidized tails were found to affect the configuration of CL by bending the tail towards the lipid headgroup, or by reaching out to the water interface of the opposite leaflet. Normally, the acyl chains of CL interact mostly with the nonpolar residues of the protein; after oxidation, the number of polar and charged amino acids in the vicinity of the acyl chain increased.
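Binding-site identification in such simulations typically reduces to distance-cutoff contact analysis between protein and lipid atoms. A minimal numpy sketch; the 0.6 nm cutoff and the coordinates are illustrative assumptions, not the thesis's actual protocol:

```python
import numpy as np

def contacts(protein_xyz, lipid_xyz, cutoff=0.6):
    """Indices of protein atoms within `cutoff` (nm) of any lipid atom:
    a simple criterion for calling an atom part of a binding site.
    Computes the full pairwise distance matrix via broadcasting."""
    d = np.linalg.norm(protein_xyz[:, None, :] - lipid_xyz[None, :, :], axis=-1)
    return np.where(d.min(axis=1) < cutoff)[0]

# Toy coordinates (nm): two protein atoms near the lipid atom, one far away
protein = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [5.0, 5.0, 5.0]])
lipid = np.array([[0.5, 0.0, 0.0]])
print(contacts(protein, lipid))  # [0 1]
```

In a real analysis the same test would be run over every trajectory frame, and residues in contact for a large fraction of the time would define the binding site.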
  • Fridlund, Christoffer (2016)
    Ion interaction with matter plays an important role in the modern silicon-based micro- and nanoindustry. Ions accelerated to significant energies are able to penetrate into materials, allowing for controlled tailoring of the materials' properties. However, it is extremely important to understand the nature of these interactions, and computer modelling is by far the most suitable technique for this purpose. The models used in ion irradiation software are based either on the binary collision approximation (BCA) or on molecular dynamics (MD). The former is both the older and the more widely used one, for three reasons: the simple underlying idea, the fast calculation speeds, and the user-friendly graphical user interfaces distributed with the codes. However, there are still some pitfalls in accuracy compared to MD. MDRANGE, an ion range MD code developed at the Accelerator Laboratory of the University of Helsinki, combines the accuracy of MD with the speed of the BCA. Given a graphical user interface, it would become more appealing to scientists not familiar with programming. Different methods and techniques for calculating the penetration depths and ranges of kinetic ions in solids are presented in this work, accompanied by an overview of the mathematics that allows them to be as physically accurate as possible within reasonable computation times. For both BCA and MD, the computationally most demanding part is generally the calculation of the interactions between two or more particles. These interactions are handled by evaluating potential functions developed especially for different combinations of atoms. The graphical user interface developed in this work is meant as a robust setup tool for use with MDRANGE. The separation of parameters into different panels and the main functionality of the different parts are presented in detail.
The tool can generate the three mandatory input files (coords.in, elstop.in, and param.in); of these, param.in is the main focus when the application is used. In addition to the generation of the three files, functions are included for investigating range calculation results in real time during simulations. During the last five decades, the simulation models intended for ion irradiation processes have developed enormously. Even though BCA models excel in speed, they cannot compete with MD in simulating many-body interactions for atoms with kinetic energies below 1 keV. MDRANGE was developed as a bridge between the two models, allowing faster MD calculations, comparable to BCA calculations, while still taking into account the many-body interactions of slower ions. With the graphical user interface developed in this work, it will become even more appealing to scientists who are not familiar with programming but still need ion range calculation software.
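The interatomic potentials mentioned above are, for the repulsive close-collision part, typically screened Coulomb potentials. As a minimal illustration (a sketch of the widely used ZBL universal potential, not the actual potential functions implemented in MDRANGE), the following damps the bare Coulomb interaction with a four-exponential screening function:

```python
import math

A0 = 0.529177   # Bohr radius [Angstrom]
E2 = 14.39964   # e^2 / (4*pi*eps0) [eV*Angstrom]

def zbl_potential(r, z1, z2):
    """Repulsive ZBL universal screened Coulomb potential V(r) in eV.

    r      : internuclear distance [Angstrom]
    z1, z2 : atomic numbers of the colliding pair
    """
    # Universal screening length for this atom pair
    a = 0.8854 * A0 / (z1**0.23 + z2**0.23)
    x = r / a
    # Universal screening function: sum of four exponentials
    phi = (0.1818 * math.exp(-3.2 * x)
           + 0.5099 * math.exp(-0.9423 * x)
           + 0.2802 * math.exp(-0.4029 * x)
           + 0.02817 * math.exp(-0.2016 * x))
    return (z1 * z2 * E2 / r) * phi

# Example: a Si ion (Z = 14) approaching a Si target atom
for r in (0.5, 1.0, 2.0):
    print(f"V({r} A) = {zbl_potential(r, 14, 14):.4f} eV")
```

At very small separations the screening function approaches one and pure Coulomb repulsion is recovered; at larger separations electronic screening cuts the interaction off rapidly, which is what makes range calculations tractable.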
  • Pokharel, Pramod (2014)
    An aerosol is a colloid of fine solid particles or liquid droplets in air or another gas (Hinds, 1999). The total carbon (TC) in carbonaceous aerosols can be divided into inorganic carbon (IC), organic carbon (OC) and elemental carbon (EC). We measured carbonaceous aerosols at the SMEAR II station (Station for Measuring Ecosystem–Atmosphere Relations) in southern Finland, operated by the Division of Atmospheric Sciences of the University of Helsinki. The measurements have been carried out continuously since 2005 using different instruments. We used a thermal-optical method to analyze carbonaceous aerosols at Hyytiälä and examined diurnal and seasonal variation in EC, OC, TC, the OC/EC ratio, black carbon (BC) and organics. The mean concentrations of EC and OC estimated using a Sunset Laboratory OC-EC analyzer, and of BC measured using a Magee Scientific Aethalometer, were 0.22±0.19 µgC/m3, 1.53±0.92 µgC/m3 and 0.35±0.30 µg/m3, respectively, whereas the average concentration of BC measured using a MAAP was 0.2±0.2 µg/m3. Concentrations of EC and BC were low in summer and high in winter, whereas the opposite was true for OC. EC, OC and BC showed no significant diurnal cycle, but a clear seasonal cycle was evident in all carbonaceous aerosols. The aerosol mass measured by an aerosol mass spectrometer (AMS) consisted of 58% organics, 28% sulfates, 5% nitrates, 9% ammonium and less than 1% chlorides at Hyytiälä. The primary organic carbon (POC) and secondary organic carbon (SOC) estimated by the EC-tracer method contributed 5% and 95%, respectively, to the OC concentrations at Hyytiälä. Comparisons between the different measurement instruments showed good agreement.
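The EC-tracer split mentioned above rests on a single assumed primary OC/EC emission ratio: OC co-emitted with EC is counted as primary, and the remainder as secondary. A minimal sketch of the bookkeeping (the ratio of 0.35 below is hypothetical, chosen only for illustration; the value actually used in the thesis is not given here):

```python
def ec_tracer_split(oc, ec, ratio_pri):
    """Split measured OC into primary and secondary organic carbon.

    oc, ec    : measured OC and EC concentrations [ugC/m3]
    ratio_pri : assumed primary OC/EC emission ratio (site-specific)
    """
    poc = ec * ratio_pri          # primary OC co-emitted with EC
    soc = max(oc - poc, 0.0)      # remainder attributed to secondary OC
    return poc, soc

# Mean Hyytiälä concentrations from the text, hypothetical primary ratio
poc, soc = ec_tracer_split(oc=1.53, ec=0.22, ratio_pri=0.35)
print(f"POC = {poc:.3f}, SOC = {soc:.3f} ugC/m3")
print(f"SOC fraction of OC = {soc / (poc + soc):.0%}")
```

With this illustrative ratio the split happens to come out close to the 5%/95% POC/SOC contributions reported in the abstract.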
  • Prittinen, Taneli (2017)
    In this work, a SQUID-based apparatus was developed for NMR measurements on helium-3, and measurements were carried out both with so-called continuous-wave NMR and with the pulsed-wave method. Because of the high price of helium-3 (about 5000 euros per litre), fluorine-containing Teflon and hydrogen-containing ice were also used as NMR test materials. The apparatus was designed and built at the O.V. Lounasmaa Laboratory of Aalto University, now known as the Low Temperature Laboratory. NMR, or nuclear magnetic resonance, is a phenomenon in which atomic nuclei possessing nuclear spin are placed in a static magnetic field and excited with external electromagnetic radiation, after which the excitation relaxes, releasing an NMR signal. In this way many different properties of matter can be studied. A SQUID (Superconducting Quantum Interference Device), in turn, is, as its name implies, a device based on quantum interference that can detect extremely small magnetic fields. In connection with NMR it serves as an efficient preamplifier with which very small signals can be detected. In this work its purpose is to improve the signal-to-noise ratio compared with conventional semiconductor preamplifiers and to produce a detector that can also measure at lower frequencies than those currently used by the research group. Based on the measurements performed, the apparatus was able to detect an NMR signal with the continuous-wave method from every material studied. Pulsed measurements have so far not been carried out successfully, owing to the rather long relaxation time of helium, about 30 seconds, which made longer measurement series difficult to realize.
Correspondingly, for the two solid materials, Teflon and ice, the resonance width was so large that absorbing pulse energy into the sample would have been difficult and would have produced signals too small to detect easily, so in this work these materials were studied only with the continuous-wave method.
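The resonance condition underlying all of these measurements is the Larmor relation f = (γ/2π)B. A small sketch using standard tabulated gyromagnetic ratios (approximate magnitudes, an assumption of this illustration rather than values from the thesis) for the three nuclei studied, ³He, ¹⁹F in Teflon and ¹H in ice:

```python
# Approximate gamma/2pi magnitudes in MHz/T (standard tabulated values)
GAMMA_OVER_2PI = {
    "3He": 32.43,   # helium-3
    "19F": 40.05,   # fluorine in Teflon
    "1H":  42.58,   # hydrogen in ice
}

def larmor_mhz(nucleus, b_tesla):
    """NMR resonance frequency in MHz for a given static field."""
    return GAMMA_OVER_2PI[nucleus] * b_tesla

# Example: a weak 10 mT static field, the low-frequency regime where
# a SQUID preamplifier outperforms semiconductor preamplifiers
for nuc in GAMMA_OVER_2PI:
    print(f"{nuc}: {larmor_mhz(nuc, 0.010) * 1000:.0f} kHz")
```

The weaker the static field, the lower the Larmor frequency and the smaller the signal, which is why a SQUID front end is attractive for low-field measurements.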
  • Kangasaho, Vilma Eveliina (2018)
    The goal of this study is to ascertain whether methane (CH4) emissions can be estimated source-wise by utilising stable isotope observations in the CarbonTracker Data Assimilation System (CTDAS). The global CH4 budget is poorly known, and there are uncertainties in the spatial and temporal distributions as well as in the magnitudes of the different sources. In this study the CTDAS-13CH4 atmospheric inverse model is developed. CTDAS-13CH4 is based on the ensemble Kalman filter (EnKF) and is used to estimate CH4 fluxes at regional and weekly resolution by assimilating CH4 and δ13C-CH4 observations. Anthropogenic biogenic emissions (rice cultivation, landfills and waste water treatment, and enteric fermentation and manure management) and anthropogenic non-biogenic emissions (coal, residential, and oil and gas) are optimised. Different emission sources can be identified by using process-specific isotopic signature values, δ13C-CH4, because different processes produce CH4 with different isotopic ratios. Optimisation of anthropogenic biogenic emissions increased the total emissions from the prior in eastern North America by 34%, while optimisation of anthropogenic non-biogenic emissions increased them by only 14%. In western North America the corresponding changes were −39% and 9%, respectively. In western parts of Europe, total emissions from the prior increased by 18% in the anthropogenic biogenic optimisation and decreased by 3% in the non-biogenic one. Optimising anthropogenic biogenic and non-biogenic emissions did not give complete estimates of the total CH4 budget, because the optimisation did not include all emission sources, and the source-specific δ13C-CH4 values were assumed not to vary regionally. However, the modelled concentrations from the optimisation of anthropogenic non-biogenic emissions agreed better with the observed CH4 concentrations and δ13C-CH4 values, so that optimisation can be considered the more successful of the two.
This study provides reliable information on the magnitudes of anthropogenic biogenic and non-biogenic emissions in regions with sufficient observational coverage. The next step in evaluating the spatial and temporal distributions and magnitudes of the different CH4 sources will be to optimise all emission sources simultaneously in a multi-year simulation.
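The source separation above relies on process-specific δ13C-CH4 signatures. While the actual CTDAS-13CH4 inversion is an ensemble Kalman filter over many regions and weeks, the underlying idea can be illustrated with a two-source isotope mass balance (the signature values below are hypothetical round numbers, not those used in the thesis):

```python
def two_source_fractions(delta_mix, delta_a, delta_b):
    """Solve f_a and f_b from a two-source isotope mass balance:
       delta_mix = f_a * delta_a + f_b * delta_b,  with  f_a + f_b = 1.
    """
    f_a = (delta_mix - delta_b) / (delta_a - delta_b)
    return f_a, 1.0 - f_a

# Hypothetical signatures (permil): biogenic ~ -60, fossil ~ -40,
# and an observed mixture of -55 permil (illustrative numbers only).
f_bio, f_fossil = two_source_fractions(-55.0, -60.0, -40.0)
print(f"biogenic fraction = {f_bio:.2f}, fossil fraction = {f_fossil:.2f}")
```

Because biogenic CH4 is isotopically lighter than fossil CH4, a joint observation of concentration and δ13C-CH4 constrains the source mix in a way that concentration alone cannot.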
  • Nummelin, Aleksi (University of Helsinki, 2012)
    The meridional overturning circulation (MOC) is a crucial component of Earth's climate system, redistributing heat around the globe. The abyssal limb of the MOC is fed by deep water formation near the poles. A basic requirement for any successful climate model simulation is the ability to reproduce this circulation correctly. The deep water formation itself, convection, occurs on smaller scales than the climate model grid size, so the convection process needs to be parameterized. It is, however, somewhat unclear how well parameterizations developed for turbulence can reproduce deep convection and the associated water mass transformations. Convection in the Greenland Sea was studied with the 1-D turbulence model GOTM and with data from three Argo floats. The model was run over the winter 2010–2011 with ERA-Interim and NCEP/NCAR atmospheric forcings and with three different mixing parameterizations: k-ε, k-kL (Mellor-Yamada) and KPP. Furthermore, the effects of mesoscale spatial variations in the atmospheric forcing data were tested by running the model with forcings taken along the floats' paths (Lagrangian approach) and from the floats' median locations (Eulerian approach). Convection was found to proceed by gradual mixed layer deepening. It caused a salinity decrease in the Recirculating Atlantic Water (RAW) layer just below the surface, while in the deeper layers a salinity and density increase was clearly visible. A slight temperature decrease was observed in the whole water column above the convection depth. Atmospheric forcing had the strongest effect on the model results. ERA-Interim forcing produced model output closer to the observations, but convection began too early with both forcings, and both generated temperatures that were too low in the end. The salinity increase at mid-depths was controlled mainly by the RAW layer, but the atmospheric freshwater flux was also found to affect the end result.
Furthermore, the NCEP/NCAR freshwater flux was found to be negative and large enough to become a clear secondary driving factor for the convection. The results show that the mixing parameterization mainly alters the timing of convection: the KPP parameterization produced clearly too fast convection, while the k-ε parameterization produced output closest to the observations. The results using the Lagrangian and Eulerian approaches were ambiguous in the sense that neither was systematically closer to the observations. This could be explained by errors in the reanalyses arising from their grid size; more conclusive results could be produced with the aid of finer-scale atmospheric data. The results, however, clearly indicate that atmospheric variability on scales of 100 km produces quantifiable differences in the results.
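The gradual mixed-layer deepening described above can be caricatured with a toy entrainment scheme, far simpler than the k-ε, k-kL or KPP closures used in GOTM: whenever surface buoyancy loss makes the mixed layer denser than the water below, the layer entrains the next level and homogenizes. Equal layer thicknesses and illustrative densities in kg/m³ are assumed:

```python
def mixed_layer_deepening(density):
    """Deepen a surface mixed layer until the column is statically stable.

    While the mean density of the mixed layer exceeds the density of the
    layer just below it, entrain that layer and homogenize. Layers are
    assumed equally thick; density is listed from surface downward.
    """
    rho = list(density)
    n = 1                                     # mixed layer = top n layers
    while n < len(rho) and sum(rho[:n]) / n > rho[n]:
        n += 1
        mean = sum(rho[:n]) / n               # homogenize the mixed layer
        for k in range(n):
            rho[k] = mean
    return rho

# Surface cooling has made the top layer densest: the scheme mixes it
# downward step by step, mimicking gradual mixed-layer deepening.
profile = [1028.2, 1027.9, 1028.0, 1028.1]
print(mixed_layer_deepening(profile))
```

Real closures additionally carry turbulent kinetic energy budgets and shear effects, which is why their main observable difference in this study was the timing of convection.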
  • Sandhu, Jaspreet (2013)
    This thesis aims to cover the central aspects of current research and advancements in cosmic topology from a topological and observational perspective. Beginning with an overview of the basic concepts of cosmology, it is observed that, though a determinant of local curvature, Einstein's equations of relativity do not constrain the global properties of space-time. The topological requirements of a universal space-time manifold are discussed, including the requirements of orientability and causality. The basic topological concepts used in the classification of spaces, i.e. the fundamental domain and the universal covering space, are discussed briefly. The manifold properties and symmetry groups for three-dimensional manifolds of constant negative, positive and zero curvature are laid out. Multi-connectedness is explored as a possible explanation for the detected anomalies in the quadrupole and octopole regions of the power spectrum, pointing at a possible compactness along one or more directions in space. The statistical significance of the evidence, however, is also scrutinized, and I discuss briefly the Bayesian and frequentist interpretations of the posterior probabilities of observing the anomalies in a ΛCDM universe. Some of the major topologies that have been proposed and investigated as possible candidates for a universal manifold, the Poincaré dodecahedron and the Bianchi universes, are studied in detail. Lastly, the methods that have been proposed for detecting a multi-connected signature are discussed. These include observational methods, such as the circles-in-the-sky method and cosmic crystallography, and theoretical methods, which have the additional advantage of being free from measurement errors and use the posterior likelihoods of models. As of the recent Planck mission, no compelling evidence of a multi-connected topology has been detected.