
Browsing by department "Department of Physics"

  • Riekki, Tapio (2016)
    Helium has two stable isotopes: the more common 4He with four nucleons, and the very rare 3He with three nucleons. At sufficiently low temperature, helium can become a superfluid, which has no viscosity. This transition is quantum mechanical in nature, and since bosonic 4He and fermionic 3He obey different quantum statistics, there is a significant difference in the transition temperature between them: it is about 2 K for pure 4He, but for pure 3He it is three orders of magnitude lower, around 1 mK. 3He – 4He mixtures also have several interesting properties at very low temperatures, such as the finite solubility of 3He in 4He even in the absolute zero limit. However, in the kelvin range, where our experiment took place, the notable feature is the shift of the superfluid transition temperature of 4He to a lower temperature due to the addition of 3He. Bulk superfluid helium can support two different sound modes: first sound is an ordinary pressure (or density) wave, whereas second sound is a temperature (or entropy) wave, unique to superfluid systems. In inviscid superfluid systems, temperature fluctuations can propagate as a second sound wave, whereas in normal systems this is not possible, as all temperature fluctuations are strongly damped. First sound and second sound do not usually exist independently of each other; rather, pressure variations are accompanied by variations in temperature, and vice versa. In this thesis, we studied experimentally the coupling between first and second sound in dilute 3He – superfluid 4He mixtures, at saturated vapor pressure, at temperatures between 2.2 K and 1.7 K, and at 3He concentrations ranging from 0% to 11%, using a quartz tuning fork mechanical oscillator. Second sound that is coupled to first sound can create anomalies in the resonance response of the quartz tuning fork, so-called second sound resonances. We found that there exists a temperature and concentration region where these anomalies disappear, which indicates that the two sound modes decouple from each other. We also present a hydrodynamical model that correctly predicts the decoupling behavior.
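    In practice, second-sound anomalies of this kind appear as deviations from the smooth resonance curve of the fork. As a hedged illustration (not code from the thesis), the sketch below fits a Lorentzian line shape to a synthetic frequency sweep; the fit residual is where such anomalies would show up. The resonance frequency, width and noise level are assumed values.

        # Minimal sketch: fit a Lorentzian line shape to a quartz tuning fork
        # frequency sweep; second-sound resonances would appear as deviations
        # from the fit. Synthetic data stands in for a measured sweep.
        import numpy as np
        from scipy.optimize import curve_fit

        def lorentzian(f, f0, df, a, c):
            """Amplitude response: peak at f0, full width df, background c."""
            return a * (df / 2)**2 / ((f - f0)**2 + (df / 2)**2) + c

        f = np.linspace(31900.0, 32100.0, 401)        # drive frequency (Hz)
        y = lorentzian(f, 32000.0, 8.0, 1.0, 0.02)
        y += np.random.normal(0.0, 0.005, f.size)     # measurement noise

        popt, _ = curve_fit(lorentzian, f, y, p0=(32000.0, 10.0, 1.0, 0.0))
        residual = y - lorentzian(f, *popt)           # anomalies would live here
        print("f0 = %.2f Hz, width = %.2f Hz" % (popt[0], popt[1]))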
  • Ruuth, Riikka (2015)
    Electrical breakdowns occasionally occur near the first walls of fusion reactor chambers and the accelerating cavities of linear colliders, such as CLIC. These arcing events are localised plasma discharges, which form under high-voltage electric fields. Vacuum arcs cause various kinds of surface damage to fusion reactor and linear accelerator structures. The surface damage, most significantly craters, has been studied experimentally, but the mechanism of its formation is still not clear. In this thesis we use large-scale molecular dynamics simulations to study crater formation on a Cu surface. We used ion irradiation to model the arcing events, in which plasma ions are accelerated by the sheath potential towards the metal surface. This ion irradiation causes multiple overlapping cascades in the Cu surface, which can lead to crater formation. The main goal was to produce surface damage comparable to that observed experimentally. Our results fall into three categories. First, we examined which initial conditions are needed to form experiment-like craters. The field emission current accompanying the plasma discharge most likely heats the sample locally to very high temperatures; we therefore tested molten and solid structures at different temperatures, as well as different scenarios for cooling the sample via electronic heat conduction. Second, we examined how different variables, such as the fluence of the ions, the energy flux, or the potential model, affect the crater shape. These results were compared with experimental crater profiles in order to find reasonable values. We also analysed how the volume of the produced crater depends on the fluence. The third part of our investigation concentrated not on the surface damage itself, but on dislocations and other damage below the surface. We again studied how different parameters affect the results, comparing the simulations by calculating the number and fraction of non-FCC atoms in the bulk. The fluence dependence of the defects was studied as well.
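    The fluence dependence of the crater volume mentioned above is typically quantified with a power-law fit. The sketch below shows one way to do this in log-log space; the fluence and volume numbers are purely illustrative, not data from the thesis.

        # Sketch: quantify how crater volume scales with ion fluence by
        # fitting a power law V = A * phi**k in log-log space.
        import numpy as np

        fluence = np.array([1e13, 3e13, 1e14, 3e14, 1e15])  # ions/cm^2 (assumed)
        volume  = np.array([2.1, 7.0, 21.0, 72.0, 230.0])   # nm^3, illustrative

        k, logA = np.polyfit(np.log(fluence), np.log(volume), 1)
        print("exponent k = %.2f, prefactor A = %.3g" % (k, np.exp(logA)))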
  • Saarinen, Juho (2013)
    Creep is time-dependent plastic deformation of solids that occurs under static stress and temperature once threshold values are exceeded. Creep occurs at high temperature, meaning temperatures above roughly 30% of the material's absolute melting temperature (this limit is slightly lower for plastics and higher for ceramics). The deformation it causes can lead to rupture, which usually happens in a short time compared to the duration of the whole process. The creep effect itself has been known since the 19th century, and for metals it is well established that diffusion is always present in creep (Coble and Nabarro-Herring creep) and that dislocations can increase the creep strain rate. The effects of creep can be seen e.g. in power plants and engines, where turbine blades, turbines, pipes and vessels are constantly at high temperature and stress. Creep relaxation also 'loosens' bolts, which then need to be retightened. In an ordinary office, creep can be seen in paper clips, especially plastic ones, which relax and lose their grip quickly because of the low melting point of plastics. Creep, because it usually takes a long time to become visible, has also played a part in accidents, e.g. in the 9/11 collapse. Creep proceeds in three stages (primary (transient), secondary (steady-state), and tertiary), and depending on the application, either the secondary or the tertiary stage is the most important one. Secondary creep is important for displacement-, buckling- and relaxation-limited situations, and tertiary creep for rupture-limited ones.
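    For the secondary (steady-state) stage, the creep rate is commonly described by a Norton-Arrhenius law, strain rate = A * sigma^n * exp(-Q/RT). The sketch below evaluates this standard form; the constants A, n and Q are placeholders, not values tied to this thesis.

        # Sketch: steady-state (secondary) creep rate from the standard
        # Norton-Arrhenius form  rate = A * sigma**n * exp(-Q / (R*T)).
        # A, n and Q are material constants; the values below are placeholders.
        import math

        def secondary_creep_rate(sigma_mpa, T_kelvin, A=1e-10, n=5.0, Q=300e3):
            R = 8.314  # gas constant, J/(mol K)
            return A * sigma_mpa**n * math.exp(-Q / (R * T_kelvin))

        # Example: creep accelerates strongly with temperature at fixed stress.
        for T in (800.0, 900.0, 1000.0):
            print(T, secondary_creep_rate(50.0, T))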
  • Iipponen, Juho (2017)
    No model can perfectly describe the behaviour of the complex and chaotic atmosphere. Model predictions must therefore be corrected towards the true state of the atmosphere with the help of observations. In this work, the description of the total thermospheric mass density given by a semi-empirical upper-atmosphere model is refined by means of data assimilation. An ensemble Kalman filter, which has proven a useful tool in lower-atmosphere data assimilation systems, is used to correct the model state. Unlike in the troposphere, however, the uncertainty in predicting the state of the thermosphere is largely due to uncertainty in the forcings that drive it. Sudden, hard-to-predict changes in the ionosphere and in solar UV radiation can rapidly alter the state of the upper atmosphere in a way whose time evolution is largely independent of the initial state of the thermospheric system. It is therefore by no means obvious that data assimilation would sharpen the models' analysis or forecast. The aim of this work is to study whether an observation-corrected model produces a more accurate analysis of the mass density of the middle and upper thermosphere than a model whose state is driven only by the forcings applied to the upper-atmosphere system. In addition, we examine whether the analysis has predictive value with respect to mass-density measurements made over the following three days. The study period is the year 2003, when the forcings were strong and changed rapidly. The observational data were produced with an algorithm that computes upper-atmosphere density from observed changes in the orbits of low Earth orbit satellites. Although the temporal resolution of the data is rather poor relative to the speed of the forcing-driven changes, it turns out that the data can be used to sharpen the analysis of the upper-atmosphere model. On the other hand, the corrected model state turns out to be unable to predict the evolution of the system, even if the time evolution of the forcings driving the thermosphere were known exactly. This is thought to be because the correction produced by the analysis depends strongly on the changes in the forcing during the period over which observations are collected for the analysis; the correction is therefore no longer optimal during the following days, by which time the state of the atmosphere has changed as a result of the forcings. The improvement the ensemble Kalman filter brings to the analysis, although statistically significant, is not very large. Besides the uncertainties related to the forcings and the observations, it is possible that the filter's performance is degraded by spurious correlations in the model prior field, or by the very simple covariance inflation method used in this work.
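    For reference, the analysis step of a stochastic ensemble Kalman filter of the kind used here can be sketched in a few lines. This is a generic textbook form with an illustrative toy state, not the assimilation code of the thesis.

        # Sketch of a stochastic ensemble Kalman filter analysis step (generic,
        # not the thesis implementation). Each column of X is one ensemble member.
        import numpy as np

        def enkf_analysis(X, y, H, R, rng):
            n, m = X.shape
            A = X - X.mean(axis=1, keepdims=True)      # ensemble anomalies
            P = A @ A.T / (m - 1)                      # sample covariance
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
            # Perturbed observations, one draw per member:
            Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, m).T
            return X + K @ (Y - H @ X)

        rng = np.random.default_rng(0)
        X = rng.normal(size=(4, 20))                   # 4 state vars, 20 members
        H = np.eye(2, 4)                               # observe first two variables
        Xa = enkf_analysis(X, np.array([0.5, -0.2]), H, 0.1 * np.eye(2), rng)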
  • Tomberg, Eemeli (2016)
    In this thesis, we study the decoherence of cosmological scalar perturbations during inflation. We first discuss the FRW model and cosmic inflation. Inflation is a period of accelerated expansion in the early universe, caused in typical models by a scalar field called the inflaton. We review cosmological perturbation theory, where perturbations of the inflaton field and the scalar degrees of freedom of the metric tensor are combined into the gauge-invariant Sasaki-Mukhanov variable. We quantize this variable using canonical quantization. Then, we discuss how interactions between the perturbations and their environment can lead to decoherence. In decoherence, the reduced density operator of the perturbations becomes diagonal with respect to a particular pointer basis. We argue that the pointer basis for the cosmological scalar perturbations consists of approximate eigenstates of the field value operator. Finally, we discuss how decoherence can help us understand the transition from quantum theory to classical perturbation theory, and justify the standard treatment of perturbations and their initial conditions in cosmology. We conclude that since decoherence should not spoil the observationally successful predictions of this standard treatment, it is unlikely that the actual amount of decoherence could be observed in, say, the CMB radiation.
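    The mechanism can be illustrated with a toy calculation (not the thesis computation): a two-level system entangled with an environment state, where the overlap of the environment states controls the off-diagonal elements of the reduced density matrix.

        # Toy illustration of decoherence: as the environment states become
        # orthogonal (overlap -> 0), the off-diagonal elements of the reduced
        # density matrix vanish and the state decoheres in the pointer basis.
        import numpy as np

        def reduced_rho(overlap):
            e0 = np.array([1.0, 0.0])
            e1 = np.array([overlap, np.sqrt(1.0 - overlap**2)])  # <e0|e1> = overlap
            psi = (np.kron([1, 0], e0) + np.kron([0, 1], e1)) / np.sqrt(2)
            rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
            return np.trace(rho, axis1=1, axis2=3)    # trace out the environment

        for ov in (1.0, 0.5, 0.0):
            print(ov, np.round(reduced_rho(ov), 3))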
  • Soini, Assi-Johanna (2017)
    Comparing meteorite densities with the densities of small solar system bodies provides clues to the nature of asteroid interiors, especially the accretional and collisional processes of asteroids, which reflect the evolution of the early solar nebula. Bjurböle is an L/LL4 ordinary chondrite. Bjurböle meteorites have high friability and porosity compared to other ordinary chondrites. Bjurböle meteorites are compositionally homogeneous, so any density variations reflect their internal structure. In addition, the Bjurböle meteorite shower consists of numerous recovered meteorites, thus sampling a large volume of the Bjurböle meteoroid. The volumes of ten Bjurböle meteorites ranging in mass from 17.27 g to 13.48 kg were determined using a non-contaminating and non-destructive 3D laser scanner and a pycnometer. Masses were determined using different scales. Densities were calculated from the volumes, and porosities were derived from the acquired densities. No trend in density or porosity as a function of meteorite mass was found. The absence of a trend in the Bjurböle meteorites can be interpreted in terms of the distribution of strength and porosity within the parent meteoroid: it suggests that density and porosity are inhomogeneously distributed within the parent body, and that the weaker parts fragmented and disintegrated during atmospheric entry. Only the parts above a certain strength survive, and their sizes vary within the parent body, forming meteorites ranging in mass from grams to tens of kilograms.
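    The density and porosity bookkeeping follows directly from the two volume measurements. The sketch below shows the standard relations with illustrative numbers (not measured Bjurböle values): bulk density from the laser-scanned envelope volume, grain density from the pycnometer volume, and porosity from their ratio.

        # Sketch of the density/porosity relations; all values illustrative.
        mass_g           = 258.4    # measured on a balance
        bulk_volume_cm3  = 72.1     # 3D laser scanner (includes pore space)
        grain_volume_cm3 = 66.8     # pycnometer (excludes connected pores)

        bulk_density  = mass_g / bulk_volume_cm3
        grain_density = mass_g / grain_volume_cm3
        porosity = 1.0 - bulk_density / grain_density
        print("bulk %.2f g/cm3, grain %.2f g/cm3, porosity %.1f %%"
              % (bulk_density, grain_density, 100 * porosity))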
  • Meaney, Alexander (2015)
    X-ray computed tomography (CT) is widely used in medical imaging and materials science. In this imaging modality, cross-sectional images of a physical object are formed by taking numerous X-ray projections from different angles and then applying a reconstruction algorithm to the measured data. The cross-sectional slices can be used to form a three-dimensional model of the interior structure of the object. CT is a prime example of an inverse problem, in which the aim is to recover an unknown cause from a known effect. CT technology continues to develop, motivated by the desire for increased image quality and spatial resolution in reconstructions. In medical CT, reducing patient dose is a major goal. The branch of CT known as X-ray microtomography (micro-CT) produces reconstructions with spatial resolutions in the micrometer range. Micro-CT has been practiced at the University of Helsinki since 2008. The research projects are often interdisciplinary, combining physics with fields such as biosciences, paleontology, geology, geophysics, metallurgy and food technology. This thesis documents the design and construction of a new X-ray imaging system for computed tomography. The system is a cone beam micro-CT scanner intended for teaching and research in inverse problems and X-ray physics. The scanner consists of a molybdenum target X-ray tube, a sample manipulator, and a flat panel detector, and it is built inside a radiation shielding cabinet. Measurements were made for calibrating the measurement geometry and for testing reconstruction quality. Two-dimensional reconstructions of various samples were computed using the plane which passes through the X-ray point source and is perpendicular to the axis of rotation. This central plane of the cone beam reduces to fan beam geometry. All reconstructions were computed using the filtered backprojection (FBP) algorithm, which is the industry standard. Tomographic reconstructions of high quality were obtained from the measurements. The results show that the imaging system is well suited for CT and the study of reconstruction algorithms.
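    As an illustration of the FBP principle used for the reconstructions, the sketch below runs a parallel-beam filtered backprojection with scikit-image on a standard phantom. The scanner's central-plane fan-beam geometry would additionally require rebinning or a weighted fan-beam FBP, which is omitted here.

        # Parallel-beam FBP sketch using scikit-image; illustrates the FBP
        # principle only, not the scanner's fan-beam reconstruction.
        import numpy as np
        from skimage.data import shepp_logan_phantom
        from skimage.transform import radon, iradon

        image = shepp_logan_phantom()
        theta = np.linspace(0.0, 180.0, 360, endpoint=False)
        sinogram = radon(image, theta=theta)              # simulated projections
        reco = iradon(sinogram, theta=theta, filter_name="ramp")
        print("mean reconstruction error:", np.abs(reco - image).mean())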
  • Smolander, Tuomo (2018)
    Remote sensing of soil permittivity and soil freezing was investigated using two different satellite-based microwave radars: ASCAT and ASAR. ASCAT is a scatterometer with good temporal resolution but coarse spatial resolution. ASAR is a synthetic aperture radar with fine spatial resolution, but it lacks good temporal coverage. Soil permittivity is related to soil moisture, which is considered an essential climate variable since it has an effect on both weather and climate. Soil freezing affects the hydrological and carbon cycles, the surface energy balance, the photosynthesis of vegetation and the activity of soil microbes. A semi-empirical model for the backscattering of forested land was used to acquire soil permittivity retrievals from satellite measurements using the method of least squares. The onset of soil freezing was determined from the permittivity retrievals using a simple threshold method. A five-year time series of satellite observations from July 2007 to June 2012 (April 2012 for ASAR) was investigated in Sodankylä in Northern Finland. The satellite-based retrievals were compared against in situ measurements of soil permittivity, soil temperature, soil frost and snow depth. According to the results, the satellite permittivity retrievals correlate with each other, but not with the in situ permittivity measurements. The ASCAT retrieval shows some correlation with in situ temperature measurements, which could impair its correlation with the in situ permittivity; the explanation for this phenomenon needs further research. Comparison of the soil-freezing onset dates from the satellite retrievals with in situ soil temperature and soil frost measurements showed quite good agreement for most years, and the onset dates did not seem to be affected by the first snowfall, even though the permittivity retrievals appeared to react in a similar way to snow cover and soil freezing. This indicates that with better calibration of the permittivity threshold this method could be used for soil freeze detection. Auxiliary information about air temperature and snow cover could also be used to filter out possible false estimates before freezing and after the snow cover starts to affect the satellite retrievals.
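    The threshold method mentioned above can be sketched as follows: the freezing onset is taken as the first time the retrieved permittivity stays below a threshold for a few consecutive observations. The threshold value, persistence length and time series below are assumed for illustration.

        # Sketch of a simple threshold method for soil-freeze onset detection.
        import numpy as np

        def freeze_onset(times, permittivity, threshold=6.0, persistence=3):
            below = permittivity < threshold
            for i in range(len(below) - persistence + 1):
                if below[i:i + persistence].all():
                    return times[i]       # first persistent sub-threshold time
            return None

        t = np.arange(20)                 # e.g. day-of-year offsets (assumed)
        eps = np.array([18, 17, 18, 16, 15, 14, 12, 9, 7, 5,
                        5, 4, 4, 5, 4, 4, 3, 4, 4, 3], float)
        print("onset at t =", freeze_onset(t, eps))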
  • Lumme, Erkka (2016)
    The magnetic field plays a central role in many dynamical phenomena in the solar corona, and the accurate determination of the coronal magnetic field holds the key to solving a whole range of open research problems in solar physics. In particular, realistic estimates of the magnetic structure of Coronal Mass Ejections (CMEs) enable better understanding of the initiation mechanisms of these eruptions as well as more accurate forecasts of their space weather effects. Due to the lack of direct measurements of the coronal magnetic field, the best way to study the field evolution is to use data-driven modelling, in which routinely available photospheric remote sensing measurements are used as a boundary condition. The magnetofrictional method (MFM) stands out from the variety of existing modelling approaches as a particularly promising method: it is computationally inexpensive but still has sufficient physical accuracy. The data-based input to the MFM is the photospheric electric field, which acts as the photospheric boundary condition. The determination of the photospheric electric field is a challenging inversion problem, in which the electric field is deduced from the available photospheric magnetic field and plasma velocity measurements. This thesis presents and discusses the state-of-the-art electric field inversion methods and the properties of the currently available photospheric measurements. The central outcome of the thesis project is the development and testing of the novel ELECTRICIT software toolkit, which processes photospheric magnetic field data and uses it to invert the photospheric electric field. The main motivation for the toolkit is coronal modelling using the MFM, but the processed magnetic field and electric field data products of the toolkit are usable also in other applications, such as force-free extrapolations or high-resolution studies of photospheric evolution. This thesis presents the current state of the ELECTRICIT toolkit as well as the optimization and first tests of its functionality. The tests show that the toolkit can already in its current state produce photospheric electric field estimates of reasonable accuracy, despite the fact that some of the state-of-the-art electric field inversion methods are yet to be implemented in the toolkit. Moreover, the optimal values of the free parameters in the currently implemented inversion methods are shown to be physically justifiable. The electric field inversions of the toolkit are also used to study other questions. It is shown that the large noise levels of the vector magnetograms in the quiet Sun cause the inverted electric field to be noise-dominated, and thus the magnetic field data from this region should not be used in the inversion. Another aspect studied is electric field inversion based only on line-of-sight (LOS) magnetograms, which is an attractive option due to the much shorter cadence and better availability of the LOS data. The tests show that inversions based on the LOS data have large errors when compared to inversions based on vector data. However, the results show reasonable consistency in the horizontal components of the electric field when the region of interest is near the centre of the solar disk.
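    The inductive core of such an electric field inversion follows from the vertical component of Faraday's law. The sketch below is a simplified stand-in for what a toolkit like ELECTRICIT does (units and physical constants omitted): it solves laplace(S) = dBz/dt with a periodic FFT Poisson solver and differentiates S to obtain the inductive electric field; real inversions add non-inductive contributions.

        # Sketch of an inductive electric-field inversion: the vertical Faraday
        # equation (curl E)_z = -dBz/dt is satisfied by E = (dS/dy, -dS/dx)
        # with laplace(S) = dBz/dt, solved on a periodic grid via FFT.
        import numpy as np

        def inductive_E(dBz_dt, dx):
            ny, nx = dBz_dt.shape
            kx = 2j * np.pi * np.fft.fftfreq(nx, d=dx)
            ky = 2j * np.pi * np.fft.fftfreq(ny, d=dx)
            KX, KY = np.meshgrid(kx, ky)
            k2 = KX**2 + KY**2
            k2[0, 0] = 1.0                    # avoid dividing the mean mode by 0
            S_hat = np.fft.fft2(dBz_dt) / k2
            S_hat[0, 0] = 0.0
            S = np.real(np.fft.ifft2(S_hat))
            dS_dy, dS_dx = np.gradient(S, dx) # rows ~ y, columns ~ x
            return dS_dy, -dS_dx              # (Ex, Ey) = (dS/dy, -dS/dx)

        dBz = np.random.randn(64, 64)         # placeholder for measured dBz/dt
        Ex, Ey = inductive_E(dBz, dx=1.0)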
  • Rydman, Walter (University of Helsinki, 2001)
  • Nissinen, Tuomas (2015)
    Particle Induced X-ray Emission (PIXE) is an ion beam analysis technique. In PIXE, atoms in the sample are excited when the sample is bombarded with protons, alpha particles, or heavy ions. X-rays are emitted when the atoms in the sample de-excite, and each element has unique characteristic x-rays. In the spectrum, the area of each peak is proportional to the corresponding elemental concentration in the sample. The existing PIXE set-up in the accelerator laboratory was upgraded to an external beam PIXE set-up for in-air measurements, because of the need to analyse large numbers of archaeological samples. Different exit window set-ups were constructed and tested. The goal was to maximize the beam spot area while minimizing the beam energy loss in the exit window. The set-up enables the use of 100 nm thick Si3N4 exit window membranes and a 4-mm-diameter beam spot. For the measurements in the current work, a 500 nm thick Si3N4 membrane was used due to its higher durability. Current measurement can be difficult when doing PIXE in air because of the ionization of air molecules in the beam's path and differences in charge collection at the sample surface. The set-up utilizes a beam profile monitor (BPM), which measures the current in vacuum prior to the exit window and is therefore not affected by the current measurement difficulties in air. Along with the BPM, a current integrator was used in the current measurements to collect the charge from the sample holder. Together these two methods provided a reliable way of measuring the current. With the developed set-up, 166 pieces of Neolithic pottery from different parts of Finland, Sweden and Estonia were measured to determine their elemental concentrations for provenance research. The AXIL software was used to analyse the spectra.
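    The peak-area step mentioned above (performed in this work with AXIL) amounts to fitting a line shape on top of the background. As a hedged illustration, the sketch below fits a Gaussian plus a linear background to a synthetic Fe K-alpha peak; all numbers are assumed.

        # Sketch: extract a PIXE peak area by fitting a Gaussian plus linear
        # background; the fitted area is proportional to the concentration.
        import numpy as np
        from scipy.optimize import curve_fit

        def peak(E, area, mu, sigma, b0, b1):
            g = area / (sigma * np.sqrt(2 * np.pi)) \
                * np.exp(-(E - mu)**2 / (2 * sigma**2))
            return g + b0 + b1 * E

        E = np.linspace(6.0, 7.0, 200)               # keV, around Fe K-alpha
        counts = peak(E, 500.0, 6.40, 0.06, 20.0, -1.0)
        counts = np.random.poisson(np.maximum(counts, 0)).astype(float)

        popt, _ = curve_fit(peak, E, counts, p0=(300.0, 6.4, 0.08, 10.0, 0.0))
        print("fitted peak area: %.0f counts" % popt[0])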
  • Byggmästar, Jesper (2016)
    Interatomic potentials are used to describe the motion of the individual atoms in atomistic simulations. An accurate treatment of the interatomic forces in a system of atoms requires heavy quantum mechanical calculations, which are not computationally feasible in large-scale simulations. Interatomic potentials are computationally more efficient analytical functions used for calculating the potential energy of a system of atoms, allowing simulations of larger systems or longer time scales than in quantum mechanical simulations. The interatomic potential functions must be fitted to known properties of the material the potential describes. Developing a potential for a specific material typically involves fitting a number of parameters included in the functional form against a database of important material properties, such as cohesive, structural, and elastic properties of the relevant crystal structures. In the Tersoff-Albe formalism, the fitting is performed with a coordination-based approach, where structures spanning a wide range of coordination numbers are used in the fitting process. Including many differently coordinated structures in the fitting database is important for good transferability to structures not considered in the fitting process. In this thesis, we review different types of widely used interatomic potentials and develop an iron-oxygen potential in the Tersoff-Albe formalism. We discuss the strengths and weaknesses of the developed potential, as well as the challenges faced in the fitting process. The potential was shown to successfully predict the energetics of various oxygen-vacancy defect clusters in iron and the basic properties of the common iron oxide wüstite. The potential might therefore mainly be applicable to atomistic simulations involving oxygen-based defects in solid iron, such as irradiation or diffusion simulations.
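    Schematically, the fitting process described above is a weighted least-squares loop over a property database. In the sketch below the property evaluator is a trivial stand-in (in reality it involves energy minimization or molecular dynamics with the candidate potential), and all target values and parameters are placeholders.

        # Schematic of potential fitting: adjust the parameter vector until
        # predicted properties match a database of targets (weighted least
        # squares). predict() is a stand-in for real calculations.
        import numpy as np
        from scipy.optimize import least_squares

        targets = np.array([4.28, 2.86, 170.0])   # e.g. Ecoh (eV), a0 (A), B (GPa)
        weights = np.array([1.0, 10.0, 0.05])     # balance different magnitudes

        def predict(params):
            D0, r0, beta = params                 # placeholder "model"
            return np.array([D0, r0 * 1.05, beta * 120.0])

        def residuals(params):
            return weights * (predict(params) - targets)

        fit = least_squares(residuals, x0=np.array([4.0, 2.7, 1.5]))
        print("fitted parameters:", fit.x)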
  • Foreback, Benjamin (2018)
    This project has aimed to investigate and propose improvements to the methods used in the System for Integrated ModeLing of Atmospheric coMposition (SILAM) model for simulating biogenic volatile organic compound (BVOC) emissions. The goal is to study an option in SILAM to use the Model of Emissions of Gases and Aerosols from Nature, Version 3 (MEGAN3) as an alternative to SILAM's existing, more simplified BVOC calculation algorithm. SILAM is an atmospheric chemical transport, dispersion, and deposition modelling system owned and continuously developed by the Finnish Meteorological Institute (FMI). The model's best-known use is in forecasting air quality in Europe and southeast Asia. Although traffic and other urban emissions are important when modelling air quality, accurate modelling of biogenic emissions is also very important when developing a comprehensive, high-quality regional and sub-regional scale model. One of the motivations of this project is that if BVOC emission simulation in SILAM were improved, the improvements would be passed on to the subsequent atmospheric chemistry algorithms which form the molecules responsible for producing secondary organic aerosols (SOA). SOA have significant impacts on local and regional weather, climate, and air quality. The development in this project therefore offers the potential for future improvement of air quality forecasting in the SILAM model. Because SILAM requires a meteorological forecast as input boundary conditions, this study used output generated by the Environment-High Resolution Limited Area Model (Enviro-HIRLAM), developed by the HIRLAM Consortium in collaboration with universities in Denmark, Finland, the Baltic States, Ukraine, Russia, Turkey, Kazakhstan, and Spain. Enviro-HIRLAM includes multiple aerosol modes, which account for the effects of aerosols in the meteorological forecast. Running SILAM with and without the aerosol effects included in the Enviro-HIRLAM meteorological output showed that aerosols likely caused a minor decrease in the BVOC emission rate. This project focused on the boreal forest of Hyytiälä, southern Finland, the site of the Station for Measuring Ecosystem-Atmosphere Relations II (SMEAR-II, 61.847°N, 24.294°E), during a one-day trial on July 14, 2010. After performing a test run over the Hyytiälä region in July 2010, it was found that SILAM significantly underestimates the BVOC emission rates of both isoprene and monoterpenes, likely because of the oversimplified approach used in the model. The current approach in SILAM, called 'Guenther Modified', uses only a few equations from MEGAN and can be classified as a strongly simplified MEGAN version with selected assumptions. It references a land cover classification map and a lookup table, taking into account only three parameters (air temperature, month, and solar radiation) in the calculations, and it ignores several other important parameters that affect BVOC emission rates. Based on qualitative analysis, this appears to be a simple but limited approach. Therefore, based on these findings, the next step in improving SILAM simulations is to propose a full implementation of MEGAN, a much more comprehensive model for simulating BVOC emissions from terrestrial ecosystems, as a replacement for the current logic in SILAM, which uses land classification and a lookup table for BVOC emission estimates.
    MEGAN includes additional input parameters, such as Leaf Area Index (LAI), relative humidity, CO2 concentration, land cover, soil moisture, soil type, and canopy height. Furthermore, this study found that simulations involving BVOCs could also potentially be improved by adding modern schemes for chemical reactions and SOA formation in future development of SILAM. After gaining an in-depth understanding of the strengths and limitations of BVOC modelling in SILAM, some recommendations for improvements to the model are proposed as a practical result.
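    For concreteness, the Guenther-type activity factors that underlie both the simplified 'Guenther Modified' scheme and MEGAN can be sketched as below, using the published Guenther et al. (1993) light and temperature response forms: the emission rate is a standard emission factor from a land-cover lookup multiplied by these activities. MEGAN3 adds many more drivers (LAI, soil moisture, CO2, and so on), which are not modelled here.

        # Sketch of Guenther et al. (1993) emission activity factors:
        # emission = standard emission factor * activity. Constants are the
        # published standard values; this is an illustration, not SILAM code.
        import math

        R = 8.314            # J/(mol K)

        def isoprene_activity(T, ppfd, Ts=303.0, Tm=314.0,
                              ct1=95000.0, ct2=230000.0,
                              alpha=0.0027, cl1=1.066):
            c_light = alpha * cl1 * ppfd / math.sqrt(1.0 + (alpha * ppfd)**2)
            x = lambda c, Tref: math.exp(c * (T - Tref) / (R * Ts * T))
            c_temp = x(ct1, Ts) / (1.0 + x(ct2, Tm))
            return c_light * c_temp

        def monoterpene_activity(T, beta=0.09, Ts=303.0):
            return math.exp(beta * (T - Ts))

        print(isoprene_activity(T=298.0, ppfd=1000.0))  # ppfd in umol m-2 s-1
        print(monoterpene_activity(T=298.0))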
  • Juva, Katriina (2016)
    The temperature and salinity fields (i.e. the hydrography) of the Baltic Sea determine the density, and hence the stratification and density-dependent circulation, of the sea. These features are affected by changes in the hydrologic circulation, most importantly by changes in the atmospheric circulation and in the water exchange with the North Sea. The aims of this thesis are to study the hydrographical conditions and changes in the surface and bottom layers of the Baltic Sea for the period 1971 - 2007, and the sensitivity of the model to a number of variables. The surface layer is well studied, but studies of the bottom layer on the scale of the whole Baltic Sea are rare. The halocline and thermocline depths are also included, since they provide information about mixing. By combining the information from the surface and the bottom, an overview of the whole hydrographical state is provided. For the analysis, three hindcast simulations based on the three-dimensional North-Baltic Sea model are used. The simulations differ in the number of vertical layers, the initial conditions and the strength of the bottom drag coefficient. The results show that the vertical stratification is weaker in the model than observed in in-situ measurements. The simulations differ remarkably in the salinity level and in its evolution. On average, the salinity is decreasing by 0.1 - 0.4 ppt per decade except in the deepest parts of the Baltic Proper. The temperature is increasing at the surface and above the permanent halocline by 0.2 - 0.4 degrees Celsius per decade on average. Large regional differences between the west and east coasts of the basins were found. A bottom temperature increase of up to 1 degree Celsius per decade was found on the eastern coast of the eastern Gotland Basin, whereas on the Swedish coast the changes are more moderate and, during some months, of the opposite sign. On the opposite side of the Bothnian Sea and the Gotland Basin, monthly anomalies of up to a degree Celsius were found for the autumn months. In the deeper layers, the temperature decreases by 0.2 - 0.4 degrees Celsius per decade. The study showed that the Baltic Sea is undergoing a rapid change. In order to get a more detailed view of the changes in stratification and circulation, the changes in density should be studied next.
  • Turkkila, Miikka (2018)
    The aim of this work was to develop tools for unpacking online discussions. The tools would enable rapid analysis of an online discussion, so that it could be used, among other things, to understand the social dimensions of online learning when the dialogue structure is compared with theories of social structures and interactions. In addition, the tools can be used to support research on teaching and in the development of web-based instruction. The theoretical background comes from research on online learning, with computer-supported collaborative learning as the specific topic area. The term covers a wide range of teaching activities, but in this work it refers to a group discussion conducted over the web with learning as the goal. The material consisted of 16 online discussions in which four different groups of four people discussed four different topics related to quantum physics. Thematic analysis was used as the research method to establish the content and structure of the discussions. This was followed by social network analysis of the discussion structure, using in particular the approach developed by McDonnell et al. (2014) of analysing the network through triadic roles, i.e. roles formed from triples of nodes. For the analyses, the discussions were tabulated such that for each message, in addition to the sender and the sending time, the addressee and the themes contained in the message were recorded. Python scripts were then written to visualize the dialogue structure and to count the roles occurring in it. The results show that the groups discussed according to the assignment and that the dialogue structure of an online discussion can be represented graphically as a so-called asynchronous temporal network. In addition, the roles occurring in the discussions can easily be counted and presented as a so-called heat map. The goals of the work were met, and the Python scripts written in this work significantly shorten the analysis of the structure of an online discussion. The results can potentially also be used to understand the internal social structures of a group; this, however, requires further work to connect the counting model used here with the theories.
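    One step of the role-counting pipeline can be sketched as follows: build a directed reply network from the tabulated messages and compute its triadic census with networkx (used here as a generic stand-in for the triadic role counting of McDonnell et al.); the per-discussion counts can then be drawn as a heat map, e.g. with matplotlib. The reply pairs below are illustrative.

        # Sketch: directed reply network and its triadic census.
        import networkx as nx

        # (sender, addressee) pairs from the tabulated discussion; illustrative
        replies = [("A", "B"), ("B", "A"), ("C", "A"), ("A", "C"), ("D", "B")]

        G = nx.DiGraph()
        G.add_edges_from(replies)
        census = nx.triadic_census(G)   # counts of the 16 directed triad types
        for triad_type, count in sorted(census.items()):
            if count:
                print(triad_type, count)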
  • Martikainen, Laura (2017)
    Radiation detectors are devices used to detect ionizing radiation. They can be manufactured from different materials for different purposes. Chemical vapour deposition (CVD) diamond detectors are semiconductor radiation detectors manufactured from artificial diamond grown using the CVD method. The physical properties of diamond make diamond detectors fast and radiation hard, and hence they are a favourable option for precise timing measurements in harsh radiation environments. The work presented in this thesis was done as part of a detector upgrade project of the TOTEM experiment at the Large Hadron Collider of CERN, the European Organization for Nuclear Research. The upgrade program includes the development and construction of a timing detector system based on CVD diamond, in order to add the capability to perform precise timing measurements of forward protons. A new I-V measurement setup was built for quality assurance measurements of diamond crystals before their further processing into timing detectors. When the setup was operated, various problems were observed, including electrical discharges, leakage-current instabilities and unexpectedly high leakage-current levels. The undesired effects disappeared when the electrical contact used for supplying the bias voltage to the measured samples was modified. Results of both the quality assurance measurements and the measurements made during the development of the setup are presented.
  • Sairanen, Viljami (2013)
    Diffusion imaging is based on the random thermal motion of water molecules, measured with a magnetic resonance imaging scanner. In soft tissue, a water molecule diffuses a distance of about 17 micrometres in 50 milliseconds, and diffusion imaging is the only clinical imaging method capable of registering such small motion non-invasively. By studying the directions in which diffusion is strong, the paths of nerve tracts can be located, for example, in the white matter of the brain. In practice this requires imaging at least 20 diffusion directions, from which a diffusion tensor describing the direction and magnitude of the diffusion is computed voxel by voxel. The method requires a fast acquisition so that physiological flows or patient motion do not disturb the registration of the much weaker thermal motion. Fast imaging, in turn, places technical demands on the gradient fields that do not arise in anatomical T1- or T2-weighted imaging: the gradient coils must be able to operate at their limits throughout the acquisition to allow several successive registrations in different diffusion directions. In the optimization, the user cannot influence the hardware design, but the acquisition parameters can be varied. A prerequisite for meaningful optimization, however, is to choose the quantities to compare, on the basis of which one can say which of the tested alternatives improved image quality. A quality assurance protocol that takes the hardware challenges of the method into account has been proposed for diffusion tensor imaging (DTI). That publication is the only one that addresses most of the problem areas of DTI and is therefore a natural starting point for DTI optimization. The method of the publication examined the signal-to-noise ratio produced by the DTI sequence, the different geometric distortions caused by the imaging sequence and by induced eddy currents, and the FA and MD values derived from the diffusion tensor. In the first phase of this work, a reference sequence based on a clinical DTI sequence was chosen and varied one acquisition parameter at a time. The varied parameters were the echo time, the parallel imaging factor, the k-space coverage, the shim volume of the main magnetic field, and the diffusion weighting factor, i.e. the b-value. In total, 10 varied sequences were tested, and the parameters that had a positive effect on image quality were selected for the second phase of the work, in which the reference sequence was varied with respect to several parameters. As the final result, it was found that the shortest possible echo time, 55 ms, and the largest possible k-space coverage parameter value, 0.780, increased the signal-to-noise ratio by 13%. Increasing the parallel imaging factor from two to two and a half reduced the geometric distortions in a qualitative assessment, but weakened the signal-to-noise ratio by at most only 5% compared to the reference sequence. The choice of the shim volume of the main magnetic field, or reducing the b-value from one thousand to eight hundred, was not found to have a significant effect on image quality in the phantom study. The results did not deviate from theoretical predictions, but owing to hardware limitations, the optimization cannot be based solely on a theoretical estimate of the correct parameter values. The method presented in this work can in the future also be used in the optimization of other diffusion-weighted sequences.
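    The signal-to-noise ratio used to compare the sequence variants can be computed, in one common convention, as the mean signal in a uniform phantom ROI divided by the standard deviation in a background ROI. The sketch below uses this convention with synthetic data; the actual protocol's ROI definitions may differ.

        # Sketch: ROI-based SNR figure of merit for comparing sequence variants.
        import numpy as np

        def roi_snr(image, signal_roi, noise_roi):
            return image[signal_roi].mean() / image[noise_roi].std()

        img = np.random.normal(100.0, 5.0, (64, 64))        # stand-in phantom image
        img[:16, :16] = np.random.normal(2.0, 5.0, (16, 16))  # background corner
        print("SNR = %.1f" % roi_snr(img, np.s_[24:40, 24:40], np.s_[:16, :16]))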
  • Naaranoja, Tiina (2014)
    The Large Hadron Collider (LHC) at CERN is currently being started up after a long shutdown. Another similar maintenance and upgrade period is due to take place in a few years. The luminosity and maximum beam energy will be increased after the shutdowns. Many upgrade projects stem from the increased demands of the changed environment and from the opportunity for installation work during the shutdowns. The CMS GEM collaboration proposes to upgrade the muon system of the CMS experiment by adding Gaseous Electron Multiplier (GEM) chambers. The new GEM detectors need new front-end electronics. There are two parallel development branches for mixed-signal ASICs: one with analog signal processing (the VFAT3 chip) and another with analog and digital signal processing (the GdSP chip). This thesis covers the development of the digital signal processing for the GdSP chip. The design is described at the algorithm level and with block diagrams. The signal originating in the triple-GEM detector poses special challenges for the signal processing. The time constant in the analog shaper is programmable due to irregularities in the GEM signal; this in turn poses challenges for the digital signal processing, since the pulse peaking time and signal bandwidth depend on the choice made for the time constant. The basic signal processing techniques and needs are common to many detectors, and most of the digital signal processing shares its requirements with an existing, well-tested front-end chip. Time pick-off and trigger production were not included in these shared tasks. Several time pick-off methods were considered and compared with simulations. The simulations were performed first using Simulink running on Matlab, and then with Cadence tools using the Verilog hardware description language. Time resolution is an important attribute determined jointly by the detector and the signal processing; it is related to the probability of associating the measured pulse with the correct event. The effect of the different time pick-off methods on the time resolution was compared with simulations, and only the most promising designs were developed further. Constant Fraction Discriminator and Pulse Recognition, the two most promising algorithms, were compared against analog Constant Fraction Discriminator and Time over Threshold time pick-off methods. The time resolutions obtained with a noiseless signal were found to be comparable. At least in gas detector applications, digital signal processing should not be ruled out for fear of deteriorated time resolution. The proposed digital signal processing chain for the GdSP includes Baseline Correction, Digital Shaper, Integrator, Zero Suppression and Bunch Crossing Identification. The Baseline Correction includes options for fixed baseline removal and a moving average filter. In addition, it contains a small memory, which can be used as a test signal input or as a look-up table, et cetera. Pole-zero cancellation is proposed for the digital shaping. The integrator filters high-frequency noise. The Constant Fraction Discriminator was found optimal for Bunch Crossing Identification.
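    The constant fraction discriminator studied here has a compact standard digital form: subtract the delayed signal from an attenuated copy and locate the zero crossing of the resulting bipolar signal, which is (ideally) independent of the pulse height. The sketch below illustrates this with an assumed fraction, delay and pulse shape, not the GdSP parameters.

        # Sketch of a digital constant fraction discriminator: form the bipolar
        # signal s[n] = f*x[n+d] - x[n] and locate its zero crossing by linear
        # interpolation. Fraction f and delay d are assumed tuning parameters.
        import numpy as np

        def cfd_time(x, fraction=0.4, delay=5):
            s = fraction * x[delay:] - x[:-delay]      # bipolar CFD signal
            for n in range(1, len(s)):
                if s[n - 1] > 0.0 >= s[n] and x[n] > 0.1 * x.max():  # arm on pulses
                    return n - 1 + s[n - 1] / (s[n - 1] - s[n])  # interpolated
            return None

        t = np.arange(64, dtype=float)
        pulse = np.exp(-(t - 20.0)**2 / 18.0)          # stand-in shaped pulse
        print(cfd_time(pulse), cfd_time(3.0 * pulse))  # same crossing time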
  • Niemi, Lauri (2018)
    First-order phase transitions in the electroweak sector are an active subject of research, as they contain ingredients for baryon number violation and gravitational-wave production. The electroweak phase transition in the Standard Model (SM) is of a crossover type, but first-order transitions are possible in scalar extensions of the SM, provided that the interactions of the Higgs boson with the new particles are sufficiently strong. If such particles exist, they are expected to have observable signatures in future collider experiments. Conversely, studying the electroweak transition in theories beyond the SM can bring new insight into the cosmological implications of these models. Reliable estimates of the properties of the transition require non-perturbative approaches to quantum field theory, due to infrared problems plaguing perturbative calculations at high temperatures. We discuss three-dimensional effective theories that are suitable for lattice simulations of the transition. These theories are constructed perturbatively by factorizing correlation functions so that the contributions from the light field modes driving the phase transition can be identified. Resummation of infrared divergences is naturally carried out in the construction procedure, and simulating the resulting effective theory on the lattice allows for a non-perturbative phase-transition study that is also free of infrared problems. Dimensionally-reduced theories can thus be used to probe the conditions under which perturbative treatments of the electroweak phase transition are valid. We apply the method to the SM augmented with a real $\text{SU}(2)$ triplet scalar and provide a detailed description of the dimensional reduction of this model. Regions of a first-order transition in the parameter space are identified in the heavy-triplet limit by the use of an effective theory for which lattice results are known. We provide a rough estimate of the accuracy of our results by considering higher-order operators that have been omitted from the effective theory, and discuss future prospects for the three-dimensional approach.
  • Korolainen, Hanna (2018)
    All aerobic organisms require oxygen, which is taken into the lungs from the outside air during inhalation. From the lungs it travels all the way to the alveoli. The lung surfactant inside the alveoli consists of roughly 90% lipids and 10% proteins. Its primary functions are the reduction of the surface tension of the fluid inside the alveoli and its role as a part of the innate immune defense. The four most abundant proteins in the lung surfactant are called SP-A, SP-B, SP-C, and SP-D. The hydrophobic surfactant protein C (SP-C) is the smallest of the four. It has a primarily α-helical structure with two palmitoylated cysteines attached to the N-terminus, which help SP-C bind more tightly to the surfactant membranes. The primary functions of SP-C include the transfer of lipids from lipid monolayers to multilayered structures, the enhancement of the adsorption of surface active molecules to the air-liquid interface, and the maintenance of the integrity of the multilayered structure. Lack of SP-C is known to lead to severe chronic respiratory pathologies. A potential dimerization motif has been suggested to be located near the C-terminus of SP-C. The purpose of this project was to study the possible dimerization of SP-C using the tools of molecular dynamics simulations. In this method, Newton's second law is used to calculate the time evolution of the system, and the resulting trajectory describes how the positions and velocities of the particles in the system change with time. Both coarse-grained (Martini force field) and atomistic (OPLS force field) models were used in the project. Dimerization was found to occur in coarse-grained simulations of 20 SP-Cs embedded in a bilayer: both aggregation and dissociation of the proteins were observed during a period of 1 μs. Excessive aggregation of membrane proteins is known to be a problem when using the Martini force field. However, the dimers in the simulations were not irreversible, which indicates that the use of the Martini force field was rather well justified. The dimerization motif found in the simulations is largely consistent with the one suggested by experiment. The dimers were also studied through atomistic simulations based on the fine-grained structures from the coarse-grained simulations, and the atomistic simulations indicated that the dimers are stable. Altogether, the simulation results favor the view that SP-C exists in a dimeric form. The function of the dimer structure remains to be clarified in future studies.
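    The time evolution mentioned above is computed with a symplectic integrator; the velocity Verlet scheme below is the textbook example (a generic sketch with a harmonic toy force, not the simulation setup used in the thesis).

        # Minimal sketch of the time integration at the heart of molecular
        # dynamics: velocity Verlet advances positions and velocities under
        # Newton's second law. A harmonic force stands in for a force field.
        import numpy as np

        def velocity_verlet(x, v, force, mass, dt, steps):
            f = force(x)
            for _ in range(steps):
                x = x + v * dt + 0.5 * f / mass * dt**2
                f_new = force(x)
                v = v + 0.5 * (f + f_new) / mass * dt
                f = f_new
            return x, v

        k = 1.0   # spring constant of the toy force
        x, v = velocity_verlet(np.array([1.0]), np.array([0.0]),
                               lambda x: -k * x, mass=1.0, dt=0.01, steps=1000)
        print(x, v)   # ~ (cos(10), -sin(10)) for the harmonic oscillator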