
Browsing by discipline "Fysiikka" (Physics)


  • Fridlund, Christoffer (2016)
    Ion interaction with matter plays an important role in the modern silicon-based micro- and nanoindustry. Ions accelerated to significant energies are able to penetrate into materials, allowing controlled tailoring of the materials' properties. However, it is extremely important to understand the nature of these interactions, and computer modelling is by far the most suitable technique for this purpose. The models used in ion irradiation software are based either on the binary collision approximation (BCA) or on molecular dynamics (MD). The former is both the older and the more widely used one, for three reasons: the simple idea, the fast calculation speeds, and the user-friendly graphical user interfaces distributed with the codes. However, it still has some pitfalls in accuracy compared to MD. MDRANGE, an ion range MD code developed at the Accelerator Laboratory of the University of Helsinki, combines the accuracy of MD with the speed of the BCA. Given a graphical user interface, it would become more appealing to scientists not familiar with programming. Different methods and techniques for calculating the penetration depths and ranges of kinetic ions in solids are presented in this work. They are accompanied by an overview of the mathematics that allows them to be as physically accurate as possible within reasonable computation times. For both BCA and MD, the computationally most demanding part is generally the calculation of the interactions between two or more particles. These interactions are handled by evaluating potential functions developed for different combinations of atoms. The graphical user interface developed in this work is meant as a robust setup tool for use with MDRANGE. The separation of parameters into different panels and the main functionality of the different parts are presented in detail. The tool can generate the three mandatory input files (coords.in, elstop.in, and param.in); of these, param.in is the main focus when the application is used. In addition to generating the three files, functions are included for investigating range calculation results in real time during simulations. During the last five decades, simulation models for ion irradiation processes have developed enormously. Even though BCA models excel in speed, they cannot compete with MD in simulating many-body interactions for atoms with kinetic energies below 1 keV. MDRANGE was developed as a bridge between the two models to allow for faster MD calculations, comparable to BCA calculations, while still taking into account the many-body interactions of slower ions. With the graphical user interface developed in this work, it will become even more appealing to scientists who are not familiar with programming but still need ion range calculation software.
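    As context for the potential functions mentioned above, the sketch below evaluates the ZBL universal screened Coulomb potential, a widely used repulsive pair interaction in ion-irradiation modelling. It is illustrative only; MDRANGE or any particular BCA code may use other parametrizations, and the function name and example values are chosen here just for the demonstration.

    ```python
    import numpy as np

    # Illustrative sketch: ZBL universal screened Coulomb repulsive pair potential.
    # (Not necessarily the parametrization used in MDRANGE or any specific BCA code.)

    E2 = 14.399645   # e^2 / (4*pi*eps0) in eV*Angstrom
    A0 = 0.529177    # Bohr radius in Angstrom

    def zbl_potential(r, z1, z2):
        """Repulsive pair potential V(r) in eV; r in Angstrom, z1/z2 nuclear charges."""
        a_u = 0.8854 * A0 / (z1**0.23 + z2**0.23)   # universal screening length
        x = r / a_u
        phi = (0.18175 * np.exp(-3.19980 * x)        # universal screening function
               + 0.50986 * np.exp(-0.94229 * x)
               + 0.28022 * np.exp(-0.40290 * x)
               + 0.02817 * np.exp(-0.20162 * x))
        return z1 * z2 * E2 / r * phi

    # Example: two silicon atoms (Z = 14) at 1 Angstrom separation
    print(f"V(1 A) = {zbl_potential(1.0, 14, 14):.1f} eV")
    ```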
  • Prittinen, Taneli (2017)
    In this work, a SQUID-based setup was developed for NMR measurements on helium-3, and measurements were carried out both with so-called continuous-wave NMR and with the pulsed-wave method. Because of the high price of helium-3 (about 5000 euros per litre), fluorine-containing Teflon and hydrogen-containing ice were also used as NMR test materials. The setup was designed and built at the O.V. Lounasmaa Laboratory of Aalto University, now known as the Low Temperature Laboratory. NMR, nuclear magnetic resonance, is a phenomenon in which nuclei with non-zero spin are placed in a static magnetic field and excited with external electromagnetic radiation, after which the excitation relaxes and releases an NMR signal; in this way many different properties of matter can be studied. A SQUID (Superconducting Quantum Interference Device), in turn, is, as the name implies, a device based on quantum interference that can detect extremely small magnetic fields. In NMR it serves as a powerful preamplifier with which even very small signals can be detected. In this work its purpose is to improve the signal-to-noise ratio compared to conventional semiconductor preamplifiers and to provide a detector that can also measure at lower frequencies than those currently in use in the research group. Based on the measurements performed, the setup was able to detect an NMR signal with the continuous-wave method from every material studied. Pulsed measurements have not yet been carried out successfully, owing to the rather long relaxation time of helium, about 30 seconds, which made longer measurement series difficult to realize. Correspondingly, for the two solid materials, Teflon and ice, the resonance was so broad that absorbing energy into the sample with pulses would be difficult and would produce signals too small to detect easily, so in this work these materials were studied only with the continuous-wave method.
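    For orientation, the measurement frequencies in such an NMR experiment follow from the Larmor relation below; the gyromagnetic ratios are textbook values for the nuclei mentioned in the abstract (3He, 1H in ice, 19F in Teflon), not values taken from the thesis.

    ```latex
    f_0 = \frac{\gamma}{2\pi}\,B_0,\qquad
    \tfrac{\gamma}{2\pi}(^{3}\mathrm{He}) \approx 32.4\ \mathrm{MHz/T},\quad
    \tfrac{\gamma}{2\pi}(^{1}\mathrm{H}) \approx 42.6\ \mathrm{MHz/T},\quad
    \tfrac{\gamma}{2\pi}(^{19}\mathrm{F}) \approx 40.1\ \mathrm{MHz/T}
    ```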
  • Peltonen, Jussi (2019)
    FINIX is a nuclear fission reactor fuel behaviour module developed at VTT Technical Research Centre of Finland since 2012. It has been simplified in comparison to full-fledged fuel performance codes, by reducing the amount of required input information, to improve its usability in coupled applications. While it has been designed to be coupled on the source-code level with other reactor core physics solvers, it can provide accurate results as a stand-alone solver as well. The corrosion that occurs at the interface between the nuclear fuel rod cladding and the reactor coolant is a limiting factor for the lifespan of a fuel rod. Of the several corrosion phenomena, oxidation of the cladding has been studied widely. It is modelled in other fuel performance codes using semiempirical models based on several decades of experimental data. This work aims to implement cladding oxidation models in FINIX and validate them against reference data from experiments and from the state-of-the-art fuel performance code FRAPCON-4.0. In addition, the models of cladding-coolant heat transfer and coolant conditions are updated to improve the accuracy of the oxidation predictions in stand-alone simulations. The theory of cladding oxidation, the water coolant models, and the general structure of FINIX and reactor analysis are studied and discussed. The results of the initially implemented cladding oxidation models contained large errors, which indicated that FINIX did not account for the axial temperature difference between the bottom and the top of the rod in the coolant. This was corrected with updates to the coolant models, which calculate various properties of a water coolant based on the International Association for the Properties of Water and Steam (IAPWS) industrial water correlations to solve the axial temperature increase in the bulk coolant. After these updates the predictions of cladding oxidation improved, and the validity of the different oxidation models was further analyzed in the context of FINIX.
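    For reference, semiempirical cladding oxidation models of the kind referred to above typically take the generic form sketched below, with cubic growth of the oxide thickness s before the kinetic transition and linear growth after it; the rate constants and activation energies are code-specific, and the exact FRAPCON-4.0 correlations are not reproduced here.

    ```latex
    \frac{\mathrm{d}s}{\mathrm{d}t} = \frac{A}{s^{2}}\exp\!\left(-\frac{Q_{1}}{RT}\right)
    \quad\text{(pre-transition, cubic kinetics)},\qquad
    \frac{\mathrm{d}s}{\mathrm{d}t} = B\exp\!\left(-\frac{Q_{2}}{RT}\right)
    \quad\text{(post-transition, linear kinetics)}
    ```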
  • Riekki, Tapio (2016)
    Helium has two stable isotopes: the more common 4He with four nucleons, and the very rare 3He with three nucleons. At sufficiently low temperature, helium becomes a superfluid that has no viscosity. This transition is quantum mechanical in nature, and since bosonic 4He and fermionic 3He follow different quantum statistics, there is a significant difference in the transition temperature between them: it is about 2 K for pure 4He, but for pure 3He it is three orders of magnitude lower, around 1 mK. 3He-4He mixtures also have several interesting properties at very low temperatures, such as the finite solubility of 3He in 4He even in the absolute zero limit. However, in the kelvin range, where our experiment took place, the notable feature is the shift of the superfluid transition temperature of 4He to a lower temperature due to the addition of 3He. Bulk superfluid helium can support two different sound modes: first sound is an ordinary pressure (or density) wave, whereas second sound is a temperature (or entropy) wave, unique to superfluid systems. In inviscid superfluid systems, temperature fluctuations can propagate as a second sound wave, whereas in normal systems this is not possible, as all temperature fluctuations are strongly damped. First sound and second sound do not usually exist independently of each other; rather, pressure variations are accompanied by variations in temperature, and vice versa. In this thesis, we studied experimentally the coupling between first and second sound in dilute 3He - superfluid 4He mixtures, at saturated vapor pressure, at temperatures between 2.2 K and 1.7 K, and at 3He concentrations ranging from 0% to 11%, using a quartz tuning fork mechanical oscillator. Second sound that is coupled to first sound can create anomalies in the resonance response of the quartz tuning fork, so-called second sound resonances. We found that there exists a temperature and concentration region where these anomalies disappear, which would indicate the two sound modes decoupling from each other. We also present a hydrodynamical model that correctly predicts the decoupling behavior.
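    For context, in the Landau two-fluid picture the two bulk sound modes have the approximate velocities below (written here for the pure 4He limit; dissolved 3He contributes to the normal component and modifies u2):

    ```latex
    u_{1}^{2} \simeq \left(\frac{\partial p}{\partial \rho}\right)_{\!S},
    \qquad
    u_{2}^{2} \simeq \frac{\rho_{s}}{\rho_{n}}\,\frac{T S^{2}}{C}
    ```

    where rho_s and rho_n are the superfluid and normal-fluid densities, S the entropy per unit mass, and C the specific heat.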
  • Ruuth, Riikka (2015)
    Electrical breakdowns occasionally occur near the first walls of fusion reactor chambers and the accelerating cavities of linear colliders, such as CLIC. These arcing events are localised plasma discharges, which are formed under high-voltage electric fields. Vacuum arcs cause various kinds of surface damage on fusion reactor and linear accelerator structures. The surface damage, most significantly craters, has been studied experimentally, but the mechanism of its formation is still not clear. In this thesis we use large-scale molecular dynamics simulations to study crater formation on a Cu surface. We used ion irradiation to model the arcing events, in which plasma ions are accelerated by the sheath potential towards the metal surface. This ion irradiation causes multiple overlapping cascades in the Cu surface, which can lead to crater formation. The main goal was to be able to produce surface damage that matches the experimental results. Our results are divided into three categories. First, we examined which initial conditions are needed to form experiment-like craters. The field emission current accompanying the plasma discharge process most likely heats the sample locally to very high temperatures; therefore we tested molten and solid structures at different temperatures, as well as different scenarios of cooling the sample via electronic heat conduction. Second, we examined how different variables, such as the fluence of the ions, the energy flux, or the potential model, affect the crater shape. These results were compared with the experimental crater profiles in order to determine reasonable values. We also analysed how the volume of the produced crater depends on fluence. The third part of our investigation concentrated not on the surface damage but on dislocations and other damage below the surface. We again studied how different parameters affect the results. We compared the simulations by calculating the number and ratio of non-FCC atoms in the bulk. The fluence dependence of the defects was studied as well.
  • Saarinen, Juho (2013)
    Creep is time-dependent plastic deformation of solids that occurs under static stress and temperature once threshold values are exceeded. Creep occurs at high temperature, meaning more than about 30% of the material's absolute melting temperature (this limit is somewhat lower for plastics and higher for ceramics). The deformation it causes can lead to rupture, which usually happens within a short time compared to the duration of the whole process. Creep itself has been known since the 19th century, and for metals it is well established that diffusion is always involved (Coble and Nabarro-Herring creep) and that dislocations can increase the creep strain rate. The effects of creep can be seen e.g. in power plants and engines, where turbine blades, turbines, pipes and vessels are continuously at high temperature and stress. Creep relaxation also loosens bolts, which then need to be retightened. In an ordinary office, creep can be seen in paper clips, especially plastic ones, which relax and lose their grip quickly because of the low melting point of plastics. Because it usually takes a long time to become visible, creep has also played a role in accidents, e.g. on 9/11. Creep proceeds in three stages (primary (transient), secondary (steady-state) and tertiary), and depending on the application either the secondary or the tertiary stage is the most important: secondary creep for displacement-, buckling- and relaxation-limited situations, tertiary creep for rupture-limited ones.
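    The secondary (steady-state) regime discussed above is commonly described by a power-law (Norton) creep equation of the form below, which makes the stress and temperature dependences explicit:

    ```latex
    \dot{\varepsilon}_{\mathrm{ss}} = A\,\sigma^{n}\exp\!\left(-\frac{Q}{RT}\right)
    ```

    where sigma is the applied stress, n the stress exponent (close to 1 for diffusional Coble and Nabarro-Herring creep, typically 3-8 for dislocation creep), Q an activation energy, R the gas constant and T the absolute temperature.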
  • Meaney, Alexander (2015)
    X-ray computed tomography (CT) is widely used in medical imaging and materials science. In this imaging modality, cross-sectional images of a physical object are formed by taking numerous X-ray projections from different angles and then applying a reconstruction algorithm to the measured data. The cross-sectional slices can be used to form a three-dimensional model of the interior structure of the object. CT is a prime example of an inverse problem, in which the aim is to recover an unknown cause from a known effect. CT technology continues to develop, motivated by the desire for increased image quality and spatial resolution in reconstructions. In medical CT, reducing patient dose is a major goal. The branch of CT known as X-ray microtomography (micro-CT) produces reconstructions with spatial resolutions in the micrometer range. Micro-CT has been practiced at the University of Helsinki since 2008. The research projects are often interdisciplinary, combining physics with fields such as biosciences, paleontology, geology, geophysics, metallurgy and food technology. This thesis documents the design and construction of a new X-ray imaging system for computed tomography. The system is a cone beam micro-CT scanner intended for teaching and research in inverse problems and X-ray physics. The scanner consists of a molybdenum target X-ray tube, a sample manipulator, and a flat panel detector, and it is built inside a radiation shielding cabinet. Measurements were made for calibrating the measurement geometry and for testing reconstruction quality. Two-dimensional reconstructions of various samples were computed using the plane which passes through the X-ray point source and is perpendicular to the axis of rotation. This central plane of the cone beam reduces to fan beam geometry. All reconstructions were computed using the filtered backprojection (FBP) algorithm, which is the industry standard. Tomographic reconstructions of high quality were obtained from the measurements. The results show that the imaging system is well suited for CT and the study of reconstruction algorithms.
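    As an illustration of the reconstruction principle in the central (fan-beam-equivalent) plane, the sketch below runs filtered backprojection on a synthetic phantom with scikit-image; it is a minimal example and does not model the scanner's actual geometry or calibration.

    ```python
    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon

    # Minimal sketch of 2D parallel/fan-beam FBP reconstruction with scikit-image.
    # (Illustrative only; the scanner's own geometry and calibration are not modelled.)

    phantom = shepp_logan_phantom()                      # synthetic test object
    theta = np.linspace(0.0, 180.0, 360, endpoint=False)
    sinogram = radon(phantom, theta=theta)               # simulated projection data
    reconstruction = iradon(sinogram, theta=theta)       # FBP with the default ramp filter

    print("sinogram shape:", sinogram.shape)
    print("reconstruction shape:", reconstruction.shape)
    ```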
  • Sillanpää, Tom (2019)
    Linear elastic properties of ex vivo porcine lenses were characterized by compression and indentation tests. Compression tests were performed on un-glued lenses (N = 76); an average stiffness of 12 ± 3 kPa (± one standard deviation) and a thickness of 7.0 ± 0.3 mm were measured 24-30 hours post mortem. For glued lenses (N = 70), the average stiffness was 15 ± 4 kPa and the thickness 7.2 ± 0.4 mm. The shear modulus measured 12 hours post mortem with the indentation test was on average 1.5 ± 0.3 kPa (N = 10). Compared to intact lenses, decapsulated lenses were 41% less stiff (N = 5) as measured with the compression test, and their shear modulus was 65% lower (N = 10) as determined by indentation.
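    For comparing the two test types, the standard isotropic linear-elasticity relation below connects the shear modulus G to the Young's modulus E; the near-incompressibility assumption (Poisson's ratio close to 0.5 for the hydrated lens) and the identification of the reported compression "stiffness" with E are assumptions made here only for illustration.

    ```latex
    E = 2G(1+\nu) \;\approx\; 3G \qquad (\nu \approx 0.5)
    ```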
  • Rydman, Walter (University of Helsinki, 2001)
  • Nissinen, Tuomas (2015)
    Particle Induced X-ray Emission (PIXE) is an ion beam analysis technique. In PIXE, atoms in the sample are excited when the sample is bombarded with protons, alpha particles, or heavy ions, and x-rays are emitted when the atoms de-excite. Each element has its own characteristic x-rays, and in the spectrum the area of each peak is proportional to the concentration of that element in the sample. The existing PIXE set-up in the accelerator laboratory was upgraded to an external-beam PIXE set-up for in-air measurements, because of the need to analyse large numbers of archaeological samples. Different exit window set-ups were constructed and tested. The goal was to obtain the maximum beam spot area with minimum beam energy loss in the exit window. The set-up enables the use of 100 nm thick Si3N4 exit window membranes and a 4-mm-diameter beam spot. For the measurements in the current work, a 500 nm thick Si3N4 membrane was used due to its higher durability. Current measurement can be difficult when doing PIXE in air because of the ionization of air molecules in the beam's path and charge collection differences at the sample surface. The set-up utilizes a beam profile monitor (BPM), which measures the current in vacuum before the exit window and is therefore not affected by the current measurement difficulties in air. Along with the BPM, a current integrator was used to collect the charge from the sample holder. Together, these two methods provided a reliable way of measuring the current. With the developed set-up, 166 Neolithic pottery pieces from different parts of Finland, Sweden and Estonia were measured to determine their elemental concentrations for provenance research. The AXIL software was used to analyse the spectra.
  • Byggmästar, Jesper (2016)
    Interatomic potentials are used to describe the motion of the individual atoms in atomistic simulations. An accurate treatment of the interatomic forces in a system of atoms requires heavy quantum mechanical calculations, which are not computationally feasible in large-scale simulations. Interatomic potentials are computationally more efficient analytical functions used for calculating the potential energy of a system of atoms, allowing simulations of larger systems or longer time scales than in quantum mechanical simulations. The interatomic potential functions must be fitted to known properties of the material the potential describes. Developing a potential for a specific material typically involves fitting a number of parameters included in the functional form against a database of important material properties, such as cohesive, structural, and elastic properties of the relevant crystal structures. In the Tersoff-Albe formalism, the fitting is performed with a coordination-based approach, where structures over a wide range of coordination numbers are used in the fitting process. Including many differently coordinated structures in the fitting database is important for good transferability to structures not considered in the fitting process. In this thesis, we review different types of widely used interatomic potentials and develop an iron-oxygen potential in the Tersoff-Albe formalism. We discuss the strengths and weaknesses of the developed potential, as well as the challenges faced in the fitting process. The potential was shown to successfully predict the energetics of various oxygen-vacancy defect clusters in iron and the basic properties of the common iron oxide wüstite. The potential might therefore mainly be applicable to atomistic simulations involving oxygen-based defects in solid iron, such as irradiation or diffusion simulations.
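    For reference, a Tersoff-type (and hence Tersoff-Albe) bond-order potential has the schematic form below; the fitted parameters A, B, lambda_1, lambda_2 and those inside the bond-order term b_ij are what the coordination-based fitting described above determines.

    ```latex
    E = \frac{1}{2}\sum_{i \neq j} f_{C}(r_{ij})\left[ f_{R}(r_{ij}) + b_{ij}\, f_{A}(r_{ij}) \right],
    \qquad
    f_{R}(r) = A\,e^{-\lambda_{1} r},\qquad f_{A}(r) = -B\,e^{-\lambda_{2} r}
    ```

    where f_C is a smooth cutoff function and the bond order b_ij decreases with the local coordination of atom i, which is what couples the fit to differently coordinated structures.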
  • Martikainen, Laura (2017)
    Radiation detectors are devices used to detect ionizing radiation. They can be manufactured from different materials for different purposes. Chemical vapour deposition (CVD) diamond detectors are semiconductor radiation detectors manufactured from artificial diamond grown using the CVD method. The physical properties of diamond make diamond detectors fast and radiation hard, and hence they are a favourable option for precise timing measurements in harsh radiation environments. The work presented in this thesis was done as part of a detector upgrade project of the TOTEM experiment at the Large Hadron Collider of CERN, the European Organization for Nuclear Research. The upgrade program includes the development and construction of a timing detector system based on CVD diamond, in order to add the capability to perform precise timing measurements of forward protons. A new I-V measurement setup was built for quality assurance measurements of diamond crystals before their further processing into timing detectors. When the setup was operated, various problems were observed, including electrical discharges, instabilities in the leakage currents and unexpectedly high leakage current levels. The undesired effects disappeared when the electrical contact used for supplying the bias voltage to the measured samples was modified. Results of both the quality assurance measurements and the measurements made during the development of the setup are presented.
  • Sairanen, Viljami (2013)
    Diffusion imaging is based on the random thermal motion of water molecules measured with a magnetic resonance imaging scanner. In soft tissue, a water molecule diffuses a distance of about 17 micrometres in 50 milliseconds, and diffusion imaging is the only clinical imaging method able to register such small motion non-invasively. By examining in which directions diffusion is strong, it is possible to trace, for example, white-matter fibre tracts in the brain. In practice this requires imaging at least 20 diffusion directions, from which a diffusion tensor describing the direction and magnitude of diffusion is calculated voxel by voxel. The method requires a fast imaging sequence so that physiological flows or patient motion do not disturb the registration of the much weaker thermal motion. Fast imaging, in turn, places technical demands on the gradient fields that do not arise in anatomical T1- or T2-weighted imaging: the gradient coils must operate at their limits throughout the scan so that many successive acquisitions in different diffusion directions are possible. In the optimization, the user cannot influence the hardware, but the imaging parameters can be varied. Meaningful optimization, however, requires choosing the quantities by which one can judge which of the tested alternatives improved image quality. A quality assurance protocol has been proposed for diffusion tensor imaging (DTI) that takes into account the hardware-related challenges of the method. That publication is the only one that addresses most of the problem areas of DTI and is therefore a natural starting point for DTI optimization. Following the published method, the signal-to-noise ratio of the DTI sequence, the various geometric distortions caused by the imaging sequence and by induced eddy currents, and the FA and MD values derived from the diffusion tensor were studied. In the first phase of the work, a reference sequence based on a clinical DTI sequence was chosen and varied one imaging parameter at a time. The varied parameters were the echo time, the parallel imaging factor, the k-space coverage, the shimming volume of the main magnetic field, and the diffusion weighting factor, i.e. the b-value. In total 10 varied sequences were acquired, and the parameters that had a positive effect on image quality were carried over to the second phase, in which the reference sequence was varied with respect to several parameters at once. As a result, the shortest possible echo time of 55 ms and the largest possible k-space coverage parameter value of 0.780 increased the signal-to-noise ratio by 13%. Increasing the parallel imaging factor from two to two and a half reduced the geometric distortions in a qualitative assessment, while weakening the signal-to-noise ratio by at most only 5% compared to the reference sequence. The choice of the shimming volume of the main magnetic field or lowering the b-value from 1000 to 800 was not found to have a significant effect on image quality in the phantom study. The results did not deviate from theoretical predictions, but owing to hardware limitations the optimization cannot be based solely on a theoretical estimate of the correct parameter values. The method presented in this work can in the future also be used to optimize other diffusion-weighted sequences.
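    For reference, the tensor model and derived scalar maps mentioned above rest on the standard DTI relations below, where b is the diffusion weighting, g-hat the unit gradient direction and lambda_1, lambda_2, lambda_3 the eigenvalues of the diffusion tensor D:

    ```latex
    S(b,\hat{g}) = S_{0}\,\exp\!\left(-b\,\hat{g}^{\mathsf T} D\,\hat{g}\right),
    \qquad
    \mathrm{MD} = \frac{\lambda_{1}+\lambda_{2}+\lambda_{3}}{3},
    \qquad
    \mathrm{FA} = \sqrt{\tfrac{3}{2}}\,
    \frac{\sqrt{(\lambda_{1}-\bar{\lambda})^{2}+(\lambda_{2}-\bar{\lambda})^{2}+(\lambda_{3}-\bar{\lambda})^{2}}}
         {\sqrt{\lambda_{1}^{2}+\lambda_{2}^{2}+\lambda_{3}^{2}}}
    ```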
  • Naaranoja, Tiina (2014)
    The Large Hadron Collider (LHC) at CERN is currently being started up after a long shutdown. Another similar maintenance and upgrade period is due to take place in a few years. The luminosity and maximum beam energy will be increased after the shutdowns. Many upgrade projects stem from the increased demands of the changed environment and from the opportunity for installation work during the shutdowns. The CMS GEM collaboration proposes to upgrade the muon system of the CMS experiment by adding Gaseous Electron Multiplier (GEM) chambers. The new GEM detectors need new front-end electronics. There are two parallel development branches for mixed-signal ASICs: one with analog signal processing (the VFAT3 chip) and another with analog and digital signal processing (the GdSP chip). This thesis covers the development of the digital signal processing for the GdSP chip. The design is described on the algorithm level and with block diagrams. The signal originating in the triple-GEM detector poses special challenges for the signal processing. The time constant in the analog shaper is programmable because of irregularities in the GEM signal; this in turn poses challenges for the digital signal processing, since the pulse peaking time and signal bandwidth depend on the choice made for the time constant. The basic signal processing techniques and needs are common to many detectors, and most of the digital signal processing shares requirements with an existing, well-tested front-end chip. Time pick-off and trigger production were not included in these shared tasks. Several time pick-off methods were considered and compared with simulations. The simulations were performed first using Simulink running on Matlab and then on Cadence tools using the Verilog hardware description language. Time resolution is an important attribute determined jointly by the detector and the signal processing; it is related to the probability of associating the measured pulse with the correct event. The effect of the different time pick-off methods on time resolution was compared with simulations, and only the most promising designs were developed further. Constant Fraction Discriminator and Pulse Recognition, the two most promising algorithms, were compared against analog Constant Fraction Discriminator and Time over Threshold time pick-off methods. The time resolutions obtained with a noiseless signal were found to be comparable. At least in gas detector applications, digital signal processing should not be ruled out for fear of deteriorated time resolution. The proposed digital signal processing chain for the GdSP includes Baseline Correction, a Digital Shaper, an Integrator, Zero Suppression and Bunch Crossing Identification. The Baseline Correction includes options for fixed baseline removal and a moving average filter; in addition it contains a small memory, which can be used as a test signal input or as a look-up table, et cetera. Pole-zero cancellation is proposed for the digital shaping. The integrator filters out high-frequency noise. The Constant Fraction Discriminator was found optimal for Bunch Crossing Identification.
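    To illustrate the constant fraction discriminator idea used for time pick-off, the sketch below implements a simple digital CFD: an attenuated copy of the pulse minus a delayed copy gives a bipolar signal whose zero crossing is (ideally) independent of the pulse amplitude. The fraction, delay, pulse shape and interpolation are illustrative choices, not the GdSP design values.

    ```python
    import numpy as np

    # Sketch of a digital constant fraction discriminator (CFD) for time pick-off.
    # Illustrative parameters only; not the GdSP design values.

    def cfd_time(samples, fraction=0.4, delay=3):
        """Return the CFD zero-crossing time in sample units, or None if not found."""
        delayed = np.concatenate((np.zeros(delay), samples[:-delay]))
        cfd = fraction * samples - delayed          # bipolar CFD signal
        for i in range(1, len(cfd)):
            if cfd[i - 1] > 0.0 >= cfd[i]:          # positive-to-negative crossing
                # linear interpolation between samples for sub-sample resolution
                return (i - 1) + cfd[i - 1] / (cfd[i - 1] - cfd[i])
        return None

    # Example with an idealized CR-RC-like shaped pulse peaking at sample 5
    t = np.arange(40, dtype=float)
    pulse = (t / 5.0) * np.exp(1.0 - t / 5.0)
    print(cfd_time(pulse))          # same crossing for 2*pulse: amplitude independent
    print(cfd_time(2.0 * pulse))
    ```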
  • Laurila, Tiia (2018)
    A Differential Mobility Particle Sizer (DMPS) system can be used to measure the number size distribution of atmospheric aerosol particles. The DMPS system consists of an impactor, a dryer, a bipolar diffusion charger, a Differential Mobility Analyzer (DMA) and a Condensation Particle Counter (CPC). This work compares the counting statistics of a modified A20 CPC and a TSI 3776 CPC measuring in parallel in a DMPS system. The smallest aerosol particles affect the environment and human health, so there is a growing need to measure the size distribution of the smallest particles accurately as well. The uncertainties of the aerosol number size distribution and of the quantities derived from it are, however, not yet fully known. The work aims to improve the counting statistics of a conventional CPC and to study the uncertainties of the number size distribution and of derived quantities such as the particle formation rate (J) and the growth rate (GR). A conventional CPC operating without a sheath flow can be modified to detect even sub-3 nm particles by increasing the temperature difference between the saturator and the condenser and by changing the aerosol flow. In this work the aerosol flow of the A20 CPC through the optics was increased to 2.5 litres per minute to minimize diffusion losses and to improve the counting statistics. Compared to the TSI 3776 CPC, the modified A20 CPC has a 50 times larger aerosol flow, so the modified A20 can be expected to count more particles with a smaller uncertainty than the TSI 3776 UCPC. The modified A20 CPC has better counting statistics, which makes the relative error arising from the counting of the size distribution smaller. The modified A20 CPC has a 50 times larger aerosol flow than the TSI 3776 CPC and counts on average 50 times more particles over the whole size range measured by the DMPS system (1-40 nm). The GR calculated with the modified A20 CPC is about 60% larger for the smallest (3-6 nm) particles and about 3% larger for 6-11 nm particles. J is also about 30% larger for 3-6 nm particles when calculated with the modified A20 CPC. The uncertainty arising from CPC counting must be taken into account when determining the total error of a DMPS measurement. The counting statistics matter not only for the number size distribution but also for the quantities derived from it.
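    The counting-statistics argument above rests on Poisson statistics: if a size bin registers N particles, the relative counting uncertainty is given below, so an instrument counting on average 50 times more particles in the same bin reduces this contribution by a factor of about the square root of 50, roughly 7.

    ```latex
    \frac{\sigma_{N}}{N} = \frac{\sqrt{N}}{N} = \frac{1}{\sqrt{N}}
    ```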
  • Sirkiä, Mika (2020)
    Transparent displays are common, and electroluminescent displays are one example of a transparent display. This thesis focuses on the driving electronics for transparent electroluminescent segmented displays. The designed electronics are discussed and their performance is measured. A new driving method is presented; it increases the luminance of the display and decreases its power consumption.
  • Uusikylä, Eetu (2020)
    Gamma spectrometry is a method for detecting and analysing gamma radiation. The method is applied in the gamma laboratory of the Radiation and Nuclear Safety Authority (STUK), where the activity and radionuclide composition of environmental samples are measured. Analysing the measurement results requires an efficiency calibration, in which the relation between the photons emitted by the measured source and the registered counts is determined. This work investigated the suitability of a program called EFFTRAN as part of the analysis process of gamma spectrometric measurements. EFFTRAN is based on the efficiency transfer (ET) method, in which the peak efficiency curve of the analysed sample is derived from an experimentally determined peak efficiency curve of a standard source. In the calibration transfers, EFFTRAN takes into account the elemental composition of the sample, which affects the photon peak efficiency at energies below 100 keV. The first goal of the work was to show that EFFTRAN gives results consistent with the current calculation methods. After that, the activity concentrations calculated with the calibration transfers were compared with the known activity concentrations of reference sources. In the last phase, the effect of the elemental composition on the results of the gamma analyses was examined. The calibration transfers performed with EFFTRAN gave mostly correct results, so they could in the future be used in gamma analyses alongside experimental efficiency calibrations.
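    For reference, the full-energy-peak efficiency that the calibration (and the EFFTRAN efficiency transfer) provides is defined as below, where N_peak is the net peak area, A the source activity, t_live the live measurement time and I_gamma the emission probability of the gamma line:

    ```latex
    \varepsilon(E) = \frac{N_{\mathrm{peak}}(E)}{A\; t_{\mathrm{live}}\; I_{\gamma}(E)}
    ```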
  • Westenius, Mathias (2014)
    The nuclear power producers TVO and Fortum have decided to deposit nuclear waste in the bedrock at Olkiluoto. The nuclear waste will be placed in copper canisters, which are surrounded by a buffer consisting of the clay bentonite. The functions of the buffer are extremely important for the safety of the final disposal, since the buffer must protect the canisters from external contaminants and seismic movements and hinder the transport of radioactive material. The most important component of bentonite is the clay mineral montmorillonite. Montmorillonite consists of platelets that are one nm thick and several hundred nm wide, and these platelets form different structures. In this work, the shapes of the montmorillonite particles, and their proportions, were studied in colloidal solutions. Various sodium and calcium solutions, corresponding to possible conditions in the bedrock at Olkiluoto, were used as solvents. The solutions were studied with small-angle x-ray scattering (SAXS) and with fits of a theoretical model developed by Pizzey et al. (2009). With these, the particle sizes and the proportions of the different particles could be determined for each solution. In all solutions, individual bentonite platelets were found as well as platelets in stacks, i.e. alternating layers of bentonite and solution. The stack sizes, and their proportions, depended on the solution. In the calcium solutions, large, compact stacks formed; the distance between the platelets in these was about 1 nm, and the stacks could consist of more than 10 platelets. The proportion of these stacks clearly increased with the calcium concentration. In the sodium solutions, stacks consisting of only two platelets appeared, with the distance between the platelets varying between 6 and 10 nm depending on the sodium concentration. In mixtures of calcium and sodium solutions, the interaction between the platelets was dominated by the calcium ions, i.e. the stack structures consisted mostly of the more compact type.
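    For reference, the interlayer (basal) spacings quoted above are read from the positions of the SAXS correlation peaks via the standard relations below, where q is the scattering vector magnitude, 2-theta the scattering angle and lambda the x-ray wavelength:

    ```latex
    d = \frac{2\pi}{q}, \qquad q = \frac{4\pi}{\lambda}\,\sin\theta
    ```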
  • Penttilä, Paavo (University of Helsinki, 2009)
    Cellulose can be used as a renewable raw material for energy production. Its utilization requires the degradation of cellulose into glucose, which can be done with the aid of enzymatic hydrolysis. In this thesis, various x-ray methods were used to characterize sub-micrometer changes in microcrystalline cellulose during enzymatic hydrolysis, to clarify the process and the factors slowing it down. The methods included wide-angle x-ray scattering (WAXS), small-angle x-ray scattering (SAXS) and x-ray microtomography. In addition, the samples were studied with transmission electron microscopy (TEM). The studied samples were hydrolyzed by enzymes of Trichoderma reesei for 6, 24, and 75 hours, which corresponded to degrees of hydrolysis of 31%, 58%, and 68%, respectively. Freeze-dried hydrolysis residues were measured with WAXS, SAXS and microtomography, whereas some of them were re-wetted for the wet SAXS and TEM measurements. The microtomography measurements showed a clear decrease in particle size on the scale of tens of micrometers. In all the TEM pictures, similar cylindrical and partly ramified structures were observed, independent of the hydrolysis time. The SAXS results were ambiguous and partly imprecise, but showed a change in the structure of the wet samples on the scale of 10-30 nm. According to the WAXS results, the degrees of crystallinity and the crystal sizes remained the same. The results support the assumption that the cellulosic particles are hydrolyzed mostly at their surfaces, since the enzymes are unable to penetrate into the nanopores of wet cellulose. The hydrolysis therefore proceeds quickly in easily accessible particles and leaves the inaccessible particles almost untouched. The structural changes observed in the SAXS measurements might correspond to a slight loosening of the microfibril aggregates, which was seen only in the wet samples because of their different pore structure.
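    Crystal sizes of the kind mentioned above are typically estimated from WAXS peak widths with the Scherrer equation, reproduced below for reference (whether exactly this estimator was used is not stated in the abstract):

    ```latex
    \tau = \frac{K\,\lambda}{\beta\cos\theta}
    ```

    where tau is the mean crystallite dimension, K (about 0.9) a shape factor, lambda the x-ray wavelength, beta the peak width (FWHM, in radians) and theta the Bragg angle.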
  • Musazay, Abdurrahman (2015)
    Perovskites are a class of materials that possess many interesting properties, with a wide range of technological applications in the fields of optoelectronics and photovoltaics. In recent years, perovskites have gained considerable attention as an inexpensive and easy-to-synthesize light-absorbing material for so-called organic-inorganic solar cells. In this study we examine the structural and electronic properties of CH3NH3PbI3 organohalide lead perovskites. Charge transport between the light-harvesting perovskite and the underlying electron-transport mesostructure is one of the factors that affect the power conversion efficiency (PCE) of these devices. Therefore, advanced characterization methods were used to investigate the structural and electronic changes that may occur at the interface. Scanning electron microscopy (SEM) was used to survey the structure and morphology of the samples. It was found that the titania grain sizes were 20-25 nm, while the perovskite grain sizes ranged from 200 nm to 500 nm. The samples were prepared using a solution processing method, which is widely considered one of the most cost-effective routes for crystal growth. However, our studies show that this method does not provide full perovskite coverage of the surface (14.4% of the surface uncovered), which reduces the light-harvesting yield. X-ray diffraction (XRD) was employed to study the crystal structure of the sample. It was concluded that the titania was in the anatase phase and the perovskite in a tetragonal crystal system (space group I4/mcm), with cell parameters a = 8.89 Å and c = 12.68 Å. Moreover, our XRD results reveal the existence of a PbI2 crystal phase, indicating an incomplete conversion of the precursors to the perovskite phase. In order to probe the changes that occur at the interface and to elucidate the electron transport mechanisms, X-ray photoelectron spectroscopy (XPS) was conducted and the core-level spectra were investigated. A shift of 0.44 eV in the binding energy of the Ti 2p line was observed between the bare titania samples and the titania/perovskite samples. We hypothesize that this shift originates either from a local screening effect or from the formation of a barrier between the perovskite and the titania that hinders charge transport and prevents compensation of the surface charges lost during photoionization. Based on the findings presented in this thesis, we suggest as possible future research directions UV photoelectron spectroscopy (UPS) for constructing the band alignment schemes with the PbI2 layer included, and a thorough investigation of the effects of the substrate and the synthesis route on the charge transport dynamics of these systems.
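    For reference, the lattice parameters quoted above relate to the observed diffraction angles through Bragg's law and the d-spacing relation for a tetragonal cell:

    ```latex
    2\,d_{hkl}\sin\theta = n\lambda,
    \qquad
    \frac{1}{d_{hkl}^{2}} = \frac{h^{2}+k^{2}}{a^{2}} + \frac{l^{2}}{c^{2}}
    ```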