
Browsing by department "Fysiikan laitos" (Department of Physics)


  • Pentikäinen, Pyry (2017)
Turbulent mixing in the atmospheric boundary layer above the Hyytiälä forestry field station in southern Finland was studied with a combination of Doppler lidar and in-situ measurements on a 125 m tall mast. The intensity of turbulent mixing was derived from measurements of the vertical and horizontal wind speeds, and other meteorological data were included in the analysis to aid interpretation. The methods applied to the data performed robustly under standard weather conditions, and thus can be used with high confidence to study more complex patterns of turbulent mixing. This is demonstrated with two case studies of turbulent mixing under complex circumstances, one of which strongly implied a causal relationship between sudden changes in heat fluxes and the initiation of a nocturnal jet. The turbulent data from Doppler lidar Vertical Azimuth Display scans were separated into directional components to study the spatial variability of turbulent mixing. No significant spatial variability was observed during the daytime, when strong turbulence consisting of large-scale turbulent eddies encompassing the whole boundary layer dominated. However, significant spatial differences were sometimes seen in the growth of the mixing layer during the morning, and stark spatial variability in turbulent mixing was detected on several summer nights. No single mechanism was conclusively shown to be responsible for the observed distribution of turbulence, but the night-time variability seemed to be connected to the presence of nocturnal jets. The area of the most intense nocturnal mixing is located in the vicinity of the nearby Station for Measuring Ecosystem-Atmosphere Relations II, where comprehensive aerosol and canopy exchange research is performed. The observed nocturnal mixing may have implications for the conclusions resulting from the measurements performed at the station. 
The thermodynamic stability of the near-surface boundary layer was investigated using scaled potential temperature profiles measured at various altitudes on the 125 m measurement mast. There was good agreement with Doppler lidar observations, but due to calibration issues in the thermometers on the mast, quantitative results lack accuracy even after corrections were applied.
  • Dinku, Zerihun Megersa (2014)
Coronal magnetic fields govern most coronal activity. Despite their importance in the solar atmosphere, there is no accurate method of measuring the coronal magnetic field directly; current methods depend on extrapolation of photospheric magnetic fields. Different models exist for studying the global structure of the coronal magnetic field, the most commonly used being the potential field source surface (PFSS) model and magnetohydrodynamic (MHD) models. In this thesis, we study the coronal magnetic field conditions during the major solar energetic particle (SEP) events of the 23rd solar cycle using the PFSS model. We use 114 SEP events observed by the SOHO/ERNE experiment in 1996-2010. We first identified 43 events that are relatively free from the disturbance caused by interplanetary coronal mass ejections (ICMEs). We examined these SEP events using IDL software developed by the Lockheed Martin Solar and Astrophysics Lab (LMSAL) and produced plots of the open coronal magnetic field of each event using SolarSoft. We also classified the SEP events by their number of connection points: events with a single connection point, double connection points, and multiple connection points. Events with multiple connection points make up almost one third of the total. These events show that the coronal magnetic connections are typically complicated, and that neighbouring magnetic field lines in the solar wind can be magnetically connected to regions that are well separated in the low corona. We also found that the actual connection longitude (a longitude that takes into account the coronal magnetic field) is usually closer to the flare site associated with the event than the Parker spiral connection longitude is. The Parker spiral longitudes, connection longitudes and flare longitudes are analysed in detail with histograms. 
Finally, we chose two example events and analysed them using intensity-time profiles of particles, plots from the LASCO CME catalog and plots produced by SolarSoft. Based on our analysis we classified the example events into gradual and hybrid SEP events.
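The Parker spiral connection longitude discussed above can be estimated from the standard textbook relation: the footpoint of the field line connected to an observer at 1 AU is shifted westward by an angle Omega * r / u, where Omega is the solar rotation rate and u the solar wind speed. A minimal sketch with illustrative values (the constants are generic, not taken from the thesis):

```python
import math

# Assumed illustrative constants (not from the thesis):
OMEGA_SUN = 2.7e-6      # sidereal solar rotation rate (rad/s)
AU = 1.496e11           # 1 astronomical unit (m)

def parker_connection_longitude(v_sw_km_s):
    """Westward offset (degrees) of the nominal Parker spiral
    footpoint for an observer at 1 AU, given the solar wind
    speed in km/s."""
    delta_phi = OMEGA_SUN * AU / (v_sw_km_s * 1e3)  # radians
    return math.degrees(delta_phi)
```

For a typical 400 km/s solar wind this gives an offset of roughly 58 degrees, i.e. the familiar connection to the western solar hemisphere; a faster wind winds the spiral less tightly and moves the footpoint closer to central meridian.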
  • Sillanpää, Salla (2018)
This study is an analysis of the different radiation parameters measured at the SMEAR II station in Hyytiälä, Finland. The measurements include global radiation, diffuse shortwave radiation, reflected shortwave radiation, net radiation, photosynthetically active radiation (PAR), diffuse PAR, reflected PAR, ultraviolet-A (UV-A) and ultraviolet-B (UV-B) radiation, incoming and outgoing infrared (IR) radiation, and below-canopy PAR. Annual and inter-annual variations in the different radiation parameters are investigated, alongside dependencies and changes in the relationships between different radiation variables. The changes in the different radiation parameters are compared to changes in cloud occurrence at the measurement station; the cloud occurrence is based on cloud base height measurements from a ceilometer. The monthly median values of the parameters and ratios of parameters investigated in this study did not show any statistically significant trends. Annual and seasonal variations were detected for both individual parameters and ratios of parameters. These variations result from changes in the solar zenith angle, climatic conditions, cloudiness, the aerosol load of the atmosphere, and surface absorbance/emittance properties.
  • Talew, Eyob (2015)
Solar energetic particle (SEP) events are sudden, temporary increases in cosmic-ray fluxes that are related to solar flares or interplanetary shocks originating from the Sun. Modelling SEP transport requires a systematic understanding of the properties of the heliosphere. Current models of particle transport in the heliosphere assume that the interplanetary medium has a steady-state solar wind and that the heliospheric magnetic field follows a Parker spiral. The presence of coronal mass ejections (CMEs) or interplanetary coronal mass ejections (ICMEs) can disturb both the solar wind and the magnetic field. In this project we analyse two heliospheric modelling tools, the ENLIL and ENLIL-with-cone models, to see how accurately they describe the heliosphere in the presence of CMEs. To this end we investigated the SEP events of the 23rd solar cycle. We first examined the 114 SEP events recorded in this cycle for their relationships with CMEs and ICMEs, checking whether each SEP event could be related to an ICME using time-window analysis and the position of the ICME when the SEP event was recorded. This yielded 43 SEP events that are ICME-clean (not related to any ICME according to the two criteria we set). We then modelled the ICME-clean events with ENLIL and analysed whether they were related to CMEs, narrowing the search to SEP events with three or fewer associated CMEs. We produced plots for these SEP events to further study the relation between the SEPs and the CMEs, and singled out the SEP event recorded on May 9, 1999 as an ideal candidate for further analysis with the ENLIL-with-cone model. 
This event was chosen because it is associated with a fast northward CME that expands into the western hemisphere and could possibly have accelerated the SEPs towards Earth. When analysed with the ENLIL-with-cone model, we found that the CME interfered with the magnetic field lines directed towards Earth, providing a likely origin for the SEP event observed at 1 AU. Though the contact between the CME and the Earthward field lines was very brief, it disrupted the Parker spiral structure of the magnetic field lines. From the statistical analysis of the ICMEs and CMEs during the large SEP events of the 23rd solar cycle, we deduced that the two assumptions used in modelling heliospheric SEP transport (a steady-state solar wind and a Parker spiral magnetic field) do not hold in typical cases; more advanced descriptions of the heliospheric field, such as ENLIL-with-cone, could be used for modelling instead. We conclude that future heliospheric modelling tools need to encompass more factors than the two assumptions discussed above.
  • Siponen, Joula (2019)
Changes in sea ice cover are one of the most visible signs of climate change. Long time series of thickness observations are needed to study climatological changes, and knowledge of sea ice thickness is also important for human activities in the Arctic. The need for better predictions of sea ice conditions is growing, among other reasons because the opening of Arctic sea routes allows a longer operational season, and improving predictions requires high-quality observations. However, measuring sea ice thickness with adequate resolution, accuracy and coverage over the Arctic is a major challenge. In this thesis a new sea ice thickness product based on satellite radar altimetry, ESA-CCI (Climate Change Initiative), is used to assess the sea ice thickness of the ocean reanalysis ORAS5. The CCI product combines two satellite missions, CryoSat-2 and ENVISAT, yielding a 15-year time series of sea ice thickness over the Arctic. The new CCI product performs well in validating the reanalysis. The overall root-mean-square error (RMSE) between sea ice thickness in the CCI product and the reanalysis is below 1 m, but the seasonal and interannual variation over the time series ranges from 0.5 m to 1.3 m, with strong regional differences. The results of this thesis support previous research. The differences are a sum of reanalysis biases, such as incorrect physics or forcing, and uncertainties in satellite altimetry, such as the snow product used in the thickness retrieval. Monthly time series of sea ice volume for the CCI coverage reveal years of extremely low volume and recovery within the season. The trends in sea ice volume are clearly negative; the monthly CCI trends are statistically significant, whereas the ORAS5 trends have larger interannual variability and therefore show no significance. The observed negative trends are connected to changes in both atmospheric and oceanic forcing.
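The RMSE used above to compare the CCI and ORAS5 thickness fields is the ordinary root-mean-square difference between two gridded or time series datasets; a minimal sketch (function and variable names are illustrative, not from the thesis):

```python
import math

def rmse(obs, model):
    """Root-mean-square difference between two equally long
    sequences of sea ice thickness values (m)."""
    if len(obs) != len(model):
        raise ValueError("series must be the same length")
    sq = [(o - m) ** 2 for o, m in zip(obs, model)]
    return math.sqrt(sum(sq) / len(sq))
```

For example, `rmse([1.0, 2.0], [1.3, 1.7])` returns 0.3, i.e. well below the sub-1 m level reported for the CCI/ORAS5 comparison.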
  • Rosta, Kawa (2017)
This work examines the operation and behaviour of a dose-area product (DAP) meter at low radiation doses, where DAP values are small. Paediatric examinations use low exposure settings, so the doses delivered to child patients are low: in paediatric thorax examinations the mean DAP delivered to the patient is 19 mGy·cm². The accuracy of the DAP value at low doses is important, because radiation exposure received in childhood carries a greater risk than a corresponding exposure in adulthood. Children therefore have a special status in radiation protection, and particular attention must be paid to the justification and optimisation of paediatric examinations. In this study, the accuracy of the DAP meter at low doses was examined using a beam-area calibration method, in which the DAP meters served as field instruments and a Raysafe Xi dosimeter as the reference instrument; in other words, the DAP meter readings were compared with the Raysafe Xi readings. The DAP meter is mounted in front of the X-ray tube, and in the beam-area method the dosimeter is placed below the tube against the radiation beam, so that during an exposure both instruments are irradiated simultaneously. The results showed that the DAP meters had been calibrated at high tube voltage and tube charge without added filtration, so the calibration did not account for low doses. Examining DAP meter accuracy at low doses showed that the regulatory requirement that the displayed value may deviate from the true value by at most 25 % was not met by the AGFA DX-D600 and FUJI FDR AcSelerate X-ray units for DAP values in the range 0-4 mGy·cm². The DAP values obtained from the DAP meters of these two units are therefore not reliable for DAP values below 4 mGy·cm².
  • Savolainen, Juha (2018)
A new theoretical model for the structure of glasses is presented and used to study the boson peak found in glasses. The model is based on a simple lattice model familiar from crystals, which is disordered using techniques from noncommutative fluid models. First, classical crystal models and concepts of lattice vibrations are reviewed, focusing on acoustic and optical waves, the density of vibrational states, heat capacity and the Debye model. Then noncommutative fluid theory and noncommutative geometry are briefly introduced to show the connection to fluids in our model. After these introductions, the glass model is formulated and used to calculate the dispersion relations, the density of vibrational states and the heat capacity. The density of states has a Van Hove singularity at low frequencies, which generates the boson peak seen in experiments. The glass is found to have both acoustic and optical waves, and the acoustic waves lie very close to the frequency of the Van Hove singularity, which hints that the boson peak should be related to acoustic waves.
  • Pensala, Tuukka (2018)
We review the basics of quantum field theories (QFT) and lattice field theories (LFT). We present, evaluate, and compare possible approaches for creating portable, high-performance LFT simulation programs. We choose one of these approaches, creating our own programming language, and discuss its features and our prototype implementation of it. Finally, we present ideas for improving the implemented solution.
  • Haavisto, Noora (2017)
Hydrographic monitoring in the Baltic Sea has traditionally been carried out with research vessels, which are expensive to operate and can visit the monitoring stations for soundings only a few times a year. Since 2011 the Finnish Meteorological Institute has tested and operated autonomous profiling Argo floats in the Bothnian Sea. The floats offer a new way to measure hydrography and deep currents, but the shallowness and low salinity of the Baltic Sea make their operation challenging. This thesis analyses the profile measurements from the first five operational years (2012-2016) of the Finnish Meteorological Institute's Argo floats. The main research questions are what the float data reveal about the Bothnian Sea as an independent dataset, where the floats perform better than traditional monitoring and where they do not, and how the floats should be used as part of the existing observation network. The mean temperature and salinity values in the Bothnian Sea calculated from the Argo data were close to those reported in the literature, although the bottom-layer salinity was about 0.5 g kg^-1 lower. This was attributed partly to the fact that not all profiles reached the bottom, and partly to the short time series compared with the climatological means in the literature. The year 2014 was found to be exceptional in both surface temperature and bottom-layer salinity; the high salinity was most likely a sign of a larger-than-usual inflow of water from the Baltic Proper into the Bothnian Sea. Argo floats produce profiles of the water-column hydrography considerably more frequently than traditional monitoring cruises, so their strength relative to ship-based monitoring lies in tracking short-term changes. Currents can also be estimated from the drift speed of the floats: the current speeds calculated for the deep basin of the Bothnian Sea were close to those reported in the literature (1.4-4.8 cm s^-1). 
Current speeds are, however, more a by-product of the hydrographic measurements than the floats' primary purpose, and velocity estimation involves many error sources which, combined with the shallow sea area, complicate the interpretation of the results. Overall, the Argo floats were found to work in the Bothnian Sea despite the challenges of the area. The floats produce profile data at a frequency that was previously unattainable because of the cost of ship-based monitoring; during their first operational years the Bothnian Sea Argo floats have produced up to 80 % of the deep-basin profile data (compared with measurements at the HELCOM monitoring stations). In the future it would be interesting to test float steering, for example in the Åland Sea, and to study how the three floats currently (10/2017) measuring simultaneously in the Bothnian Sea represent the whole basin, and whether this is a suitable number of floats for simultaneous operational monitoring.
  • Lindgren, Elisa (2015)
In spring, as solar radiation increases, the melting season begins on ice-covered lakes: first the snow and then the ice cover melt. Snow-free ice transmits some of the visible part of solar radiation, which warms the water beneath the ice, producing an unstable density stratification and the onset of convective mixing. Because the duration of the ice season of freezing lakes has shortened over the past century, which can affect, for example, the climate of Arctic land areas and lake ecology, the physical processes of ice break-up and the lake melting season need to be studied so that the interaction between water, ice and atmosphere can be better understood and modelled. Kilpisjärvi (69 03'N 20 50'E) is an Arctic tundra lake in the Käsivarsi region of Finnish Lapland. Its ice winter usually lasts from November to June, and the lake ice reaches an annual thickness of about 90 cm. In spring 2013 a field campaign was carried out at Kilpisjärvi to study the physical processes of Arctic lakes during the melting season; the measurements of ice properties and water temperature collected during the campaign are analysed in this thesis. Ice melt was monitored, the amount of PAR penetrating the ice was measured, and the crystal structure of an ice sample from the lake was examined. Thermistor chains deployed in the lake recorded the evolution of water temperature at different depths, and CTD soundings tracked the regional variation of temperature in different parts of the lake as well as the attenuation of radiation in the water body. Weather data were also obtained from the University of Helsinki's Kilpisjärvi biological station and the Finnish Meteorological Institute's weather station in the Kilpisjärvi village centre, and from these the heat balance of the ice was calculated. During the field period, 25 May - 4 June, conditions at Kilpisjärvi were exceptional: the air temperature rose as high as +25 °C. The lake ice melted rapidly, 4-5 cm per day, leading to an early ice break-up on 3 June. 
Although the heat balance was dominated by solar radiation, the sensible heat flux and net longwave radiation also warmed the ice. The transmittance of the ice, which consisted entirely of snow ice, was high, 0.6-0.9, and the light attenuation coefficient ranged from 0.2 m^-1 to 0.8 m^-1. Under the ice the irradiance level was on average 155 W m^-2, and this heating caused a rapid rise in water temperature under the ice even before ice break-up. Regional variation was nevertheless clear, as the littoral areas warmed faster than the pelagic areas. The study showed that close to ice break-up, solar radiation is the most important factor warming the water, and the convection it drives mixes the water effectively. The amount of radiation transmitted through the ice and the warming of the water probably vary between different parts of the lake because of the inhomogeneous ice cover, which can give rise to convective cells driven by density differences and thus a complex flow structure under the ice. This could not be observed with the methods used, so in similar future studies the measurements should be designed to cover a larger part of the lake.
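The transmittance and attenuation coefficients reported above combine in the standard Beer-Lambert form: the irradiance at depth z below the ice is the surface irradiance, reduced by the ice transmittance, decaying exponentially in the water. A minimal sketch (the function and example numbers are illustrative, not the thesis's computation):

```python
import math

def irradiance_below_ice(i0, transmittance, k, depth_m):
    """Planar irradiance (W m^-2) at a given depth below the ice,
    following Beer-Lambert attenuation in the water column.
    i0:            incident solar irradiance at the ice surface (W m^-2)
    transmittance: fraction transmitted through the ice cover (0..1)
    k:             diffuse attenuation coefficient of the water (m^-1)
    """
    return i0 * transmittance * math.exp(-k * depth_m)
```

With an incident irradiance of 200 W m^-2 and a transmittance of 0.775, the level just under the ice is 155 W m^-2, matching the order of the average reported above; deeper in the water column the value decays with the chosen attenuation coefficient.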
  • Lahtinen, Jyri (2017)
Vacuum arc electrical breakdowns cause problems in many devices operating under high electric fields, such as the Compact Linear Collider (CLIC), a proposed next-generation particle accelerator at CERN. The breakdown phenomenon is not well understood despite decades of research. Diffusive mass transport on metallic surfaces under electric fields is hypothesised to play a role in the events leading to breakdowns. Kinetic Monte Carlo (KMC) is a well-established simulation method for studying diffusion. The weakness of KMC is that it requires knowledge of the rates of all processes that can happen during the simulation: in the case of diffusion, these are migration events of mobile objects. The rates can be found from migration barriers, which in turn can be calculated using various methods. In this thesis, the parametrisation scheme of an existing atomistic KMC model for studying Cu surface diffusion was improved. In this model, the migration barrier is a function of the local environment of the migrating atom. The barriers in different environments were calculated with the nudged elastic band (NEB) method. It is an accurate way of finding barriers, but too expensive to be used for calculating all of them in the improved parametrisation scheme. This problem was treated with a multidisciplinary approach: an artificial neural network (ANN) was trained to predict the barriers using a limited dataset calculated with the NEB method. Good prediction performance was achieved for stable migration processes on smooth surfaces, and the predictor function was found to be fast enough to be called during KMC runtime.
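The link between a migration barrier and a KMC event rate is the standard Arrhenius expression, rate = nu0 * exp(-Eb / (kB * T)). A minimal illustration (the attempt frequency and temperature are generic textbook assumptions, not values from the thesis):

```python
import math

K_B = 8.617e-5   # Boltzmann constant (eV/K)

def kmc_rate(barrier_ev, temp_k, nu0=1e13):
    """Arrhenius rate (events/s) for a migration event with the
    given barrier, as used to turn calculated or ANN-predicted
    barriers into rates for a KMC simulation."""
    return nu0 * math.exp(-barrier_ev / (K_B * temp_k))
```

A 0.5 eV barrier at 300 K gives a rate of order 1e4-1e5 events per second; raising the barrier by a few tenths of an eV suppresses the rate by many orders of magnitude, which is why barrier accuracy matters so much for KMC.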
  • Sarkar, Subhojit (2015)
The Standard Model (SM) fails, both conceptually and phenomenologically, to incorporate and explain several fundamental problems of particle physics and cosmology, such as a viable dark matter candidate, a mechanism for inflation, neutrino masses and the hierarchy problem. In addition, the recent discovery of the 125 GeV Higgs boson and the measured top quark mass favour metastability of the electroweak vacuum, implying that the Higgs boson is trapped in a false vacuum. In this thesis we propose the simplest extension of the SM: adding an extra degree of freedom, a scalar singlet. The singlet can mix with the Higgs field via the Higgs portal, and as a result we obtain two scalar mass eigenstates (Higgs-like and singlet-like). We identify the lighter mass eigenstate with the 125 GeV SM Higgs boson. Due to the mixing, the SM Higgs quartic coupling receives a finite tree-level correction which can make the electroweak vacuum completely stable. We then study the stability bounds on the tree-level parameters and determine the allowed mass region of the heavier (singlet-like) eigenstate for the range of mixing angles where all the bounds are satisfied. We also obtain regions of parameter space for different signs of the Higgs portal coupling. In the allowed region, the singlet-like state can decay into two Higgs-like states, and we find the corresponding decay rate to be substantial. Finally, we review various applications of the singlet extension, most notably to the problems of dark matter and inflation.
  • Säppi, Matias (2016)
This thesis discusses various topics related to the study of strongly coupled quantum field theories at finite density or, equivalently, finite chemical potential. In particular, the focus is on the theory of strong interactions, quantum chromodynamics (QCD). Finite-density QCD is important in the description of numerous physical systems such as neutron stars and heavy-ion collisions, a brief overview of which is given, along with the QCD phase diagram, as motivating examples. After this, the general construction of a Lagrangian finite-density quantum field theory is described. In contrast with the zero-density setting, a finite-density field theory does not admit a simple description on the lattice, rendering this standard approach to strongly coupled theories impractical due to the so-called sign problem. Various attempts at addressing the sign problem are reviewed, and the so-called Lefschetz thimble approach and the complex Langevin method are discussed in detail; some mathematical details related to these approaches are elaborated in the appendices. Owing to the impracticality of lattice methods, a perturbative description becomes more important at finite density. Perturbative finite-density QCD and methods useful in practical calculations are discussed, among them a detailed proof of a set of so-called 'cutting rules' that apply to zero-temperature finite-density quantum field theory, an example computation using these rules, and a discussion of various divergences and their relation to the zero-density theory.
  • Kurppa, Mona (2016)
This thesis is a case study of the impact of urban planning on local air quality along a planned city boulevard in western Helsinki. The aim is to analyse the ventilation and dispersion of traffic-related air pollutants inside street canyons and courtyards in four alternative city-block design versions, and in particular whether the configuration and variation of building height can improve air quality in planned neighbourhoods and thereby support the decision-making process in city planning. The study employs the large-eddy simulation (LES) model PALM with embedded Lagrangian stochastic particle and canopy models to simulate the transport of pollutants (air parcels) and the aerodynamic impact of street trees and a surrounding forest on pollutant transport. The embedded models are revised by the author to take into account the horizontal heterogeneity of the particle sources and plant canopy. Furthermore, three-dimensional two-way self-nesting is used in PALM for the first time in this study. High-resolution simulations are conducted over a real urban topography under two contrasting meteorological conditions, with neutral and stable stratification and south-western and eastern wind directions, respectively. The comparison of the different boulevard-design versions is based on analysing the temporal mean particle concentrations, the turbulent vertical particle flux densities and the particle dilution rate. Differences in flux densities between the versions show a strong dependence on urban morphology, whereas the advection-related dilution rate depends on the volume of unblocked streamwise street canyons. A suggestive ranking of the versions is performed based on the horizontal mean values of the analysis measures (separately for the boulevard, the other street canyons, the courtyards and the surroundings). 
Considering both meteorological conditions, the design version with variable building height and short canyons along the boulevard outperforms the other design versions based on the ranking. This is especially pronounced in stable conditions. Surprisingly, variability in building shape did not induce clear improvements in ventilation. This is the first high-resolution LES study conducted over a real urban topography applying sophisticated measures to assess pollutant dispersion and ventilation inside street canyons and courtyards.
  • Kiuru, Risto (2016)
Physical and mechanical properties of drill core specimens were determined as part of investigations into excavation damage in the dedicated study area in ONK-TKU-3620. The main goal of this study was to find indicators of excavation damage in the form of anomalous physical properties linked to increased porosity or lower mechanical strength; geophysical indicators are desirable for their ease, speed and cost-effectiveness. The secondary goal was to find associations between dynamic and static elastic properties, to allow estimation of rock mechanical properties using geophysical measurements. The parameters most sensitive to the presence of (saline) pore fluid showed depth dependencies. Resistivity showed abnormally low values in the first 0.2 m, and an increase with depth in the first 0.7 m from the study area surface. S-velocity, shear impedance, shear modulus and Young's modulus all showed abnormally low values in the first 0.2 m from the study area surface. In addition to clear depth dependencies, other indicators of excavation damage were found. Specimens in the first 0.7 m from the study area surface showed an increased proportion of high (> 0.5 %) porosity values. Combinations of high porosity/shallow depth, low resistivity/shallow depth, high porosity/low resistivity, low IP/shallow depth and low IP/high porosity also seem to separate anomalous specimens, as did S-velocity, P/S ratio, Poisson's ratio and all three impedances with respect to depth. Abnormally high S-velocity with respect to the other elastic properties also seemed to separate anomalous specimens. On one of the anomalous specimens, the presence of an EDZ feature was confirmed by a Posiva geologist; this specimen could be identified based on S-velocity, P/S ratio, Poisson's ratio and all three impedances. The best indicators of excavation damage based on this study appear to be resistivity, S-velocity, shear impedance, shear modulus and Young's modulus. 
Most of the other elastic parameters, in conjunction with other parameters, could be used to identify anomalous specimens. The results support the use of electrical and seismic methods to identify excavation damage. Estimation of static elastic properties from dynamic elastic properties does not appear feasible on the basis of this study. The views and opinions presented here are those of the author, and do not necessarily reflect the views of Posiva.
  • Siltala, Lauri (2016)
    Estimates for asteroid masses are based on gravitational perturbations on the orbits of other objects such as Mars, spacecraft, or other asteroids and/or their satellites. In the case of asteroid-asteroid perturbations, this leads to a 13-dimensional inverse problem where the aim is to derive the mass of the perturbing asteroid and six orbital elements for both the perturbing asteroid and the test asteroid using astrometric observations. We have developed and implemented three different mass estimation algorithms utilizing asteroid-asteroid perturbations into the OpenOrb asteroid-orbit-computation software: the very rough 'marching' approximation, in which the asteroid orbits are fixed at a given epoch, reducing the problem to a one-dimensional estimation of the mass, an implementation of the Nelder-Mead simplex method, and most significantly, a Markov-Chain Monte Carlo (MCMC) approach. We introduce each of these algorithms with particular focus on the MCMC algorithm, and present example results for both synthetic and real data. Our results agree with the published mass estimates, but suggest that the published uncertainties may be misleading as a consequence of using linearized mass-estimation methods. Finally, we discuss remaining challenges with the algorithms as well as future plans.
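The MCMC approach above samples a 13-dimensional posterior; its core, the random-walk Metropolis step, can be illustrated in one dimension. This is a toy sketch, not the OpenOrb implementation: the "posterior" here is simply a standard normal log-density rather than an orbit-fit likelihood.

```python
import math
import random

def metropolis(log_post, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis sampler for a 1-D log-posterior.
    Proposes Gaussian moves and accepts each with probability
    min(1, post(proposal) / post(current))."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        # Accept or reject in log space to avoid underflow.
        if math.log(rng.random()) < lpp - lp:
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Sampling a standard normal recovers mean ~0 and variance ~1.
samples = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)
```

The histogram of the chain approximates the posterior, which is what allows the method to report full (possibly non-Gaussian) uncertainty distributions rather than the linearized uncertainties criticized above.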
  • Rytman, Kristiina (2018)
    The purpose of this study is to investigate what factors affect a student's likelihood of successfully reaching his or her goal of becoming a scientist or, more specifically, a physicist. Academic achievement has long been associated with intelligence, but this restricted view overlooks how noncognitive personality traits relate to success. This study therefore analyses factors relating both to skill and talent in physics and to personality, and investigates their relationship to a physics attitude assessment. Three questionnaires were used to gather data: the Grit Survey, the Colorado Learning Attitudes about Science Survey (CLASS) and the Force Concept Inventory (FCI). Grit measures a person's perseverance of effort and consistency of interest; it is a personality trait thought to predict success better than previous personality constructs. CLASS measures students' expert-like thinking in a physics context and is used here as a predictor of academic achievement. FCI measures a student's conceptual understanding of force in Newtonian physics and is used here as an evaluator of a student's skill and talent in physics. The sample consisted of 71 students (43 male, 28 female) attending a first-year physics course at the University of Helsinki; most had decided to major in physics. Correlation analysis found no significant relationship between grit and CLASS (r = 0.181, p = 0.131), whereas FCI and CLASS did show a correlation of r = 0.312. In other words, grit did not correlate with CLASS, the predictor of academic achievement, while FCI, the evaluator of skill and talent in physics, did. Correlations between the individual factors of grit and CLASS were also analysed. Grit as a whole does not seem to relate to academic achievement, but its dimension of effort does: grit effort correlated with CLASS with a coefficient of r = 0.241 (p = 0.043). The two dimensions of grit seem to measure different things, and it is important to be able to analyse them separately.
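The r and p values reported above come from a standard Pearson correlation analysis. As a minimal sketch of that computation (the data below are invented for illustration, not the thesis sample):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for five students: grit scale vs. CLASS percentage.
grit = [3.2, 3.8, 2.9, 4.1, 3.5]
clas = [55.0, 60.0, 48.0, 72.0, 58.0]
r = pearson_r(grit, clas)
```

In practice the p-value would be obtained alongside r, e.g. from a t-distribution with n - 2 degrees of freedom.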
  • Åhlgren, Elina Harriet (2012)
    Graphene is an ultimately thin membrane composed of carbon atoms, whose potential future applications range from desalinating sea water to fast electronics. When studying the properties of this material, molecular dynamics (MD) has proven to be a reliable way to simulate the effects of ion irradiation of graphene. As ion beam irradiation can be used to introduce defects into a membrane, it can also be used to add substitutional impurities and adatoms into the structure. In the first study introduced in this thesis, I presented results on doping graphene with boron and nitrogen. The most important message of this study was that doping graphene with an ion beam is possible and can be applied not only to bulk targets but also to a sheet of carbon only one atomic layer thick. Another important result was that different defect types have characteristic energy ranges that differ from each other. Because of this, the defect types created during irradiation can be controlled by varying the ion energy. The optimum energy for creating a substitution is about 50 eV for an N ion (55% probability); for a B ion the probability peaks at roughly the same energy (ca. 40%). Single vacancies are most probably created at an energy of about 125 eV for N (55%) and at ca. 180 eV for B (35%). For double vacancies, the maximum probabilities are roughly at 110 eV for N (16%) and at 70 eV for B (6%). The probabilities for adatoms are highest at very small energies. A one-atom-thick graphene membrane is reportedly impermeable to standard gases. Hence, graphene's selectivity for gas molecules trying to pass through the membrane is determined only by the size of the defects and vacancies in the membrane. Gas separation using graphene membranes requires knowledge of the properties of defected graphene structures. In this thesis, I presented results on the accumulation of ion irradiation damage in graphene using MD simulations. According to our results, graphene can withstand vacancy concentrations of up to 35% without breaking. A simple model was also introduced to predict the influence of the irradiation during experiments. In addition to the specific results regarding ion irradiation manipulation of graphene, this work shows that MD is a valuable tool for materials research, providing information on the atomic scale that is rarely accessible to experimental research, e.g., during irradiation. Using realistic interatomic potentials, MD provides a computational microscope that helps us understand how materials behave at the atomic level.
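The reported optima can be read as a small map from ion species and desired defect type to the ion energy where that defect is most likely. The sketch below merely restates the abstract's numbers; the function and table names are illustrative, not from the thesis:

```python
# Reported optimum ion energies (eV) and peak creation probabilities for
# each defect type under N/B ion irradiation of graphene (values from the
# abstract above; the B substitution energy is given as "about the same" as N).
defect_optima = {
    ("N", "substitution"):   (50.0, 0.55),
    ("B", "substitution"):   (50.0, 0.40),
    ("N", "single_vacancy"): (125.0, 0.55),
    ("B", "single_vacancy"): (180.0, 0.35),
    ("N", "double_vacancy"): (110.0, 0.16),
    ("B", "double_vacancy"): (70.0, 0.06),
}

def best_energy(ion, defect):
    """Return (optimum energy in eV, peak probability) for the requested defect."""
    return defect_optima[(ion, defect)]
```

This captures the abstract's central point: choosing the ion energy selects which defect type dominates.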
  • Puonti, Oula (2012)
    Magnetic resonance imaging (MRI) provides spatially accurate, three-dimensional structural images of the human brain in a non-invasive way. This allows us to study the structure and function of the brain by analysing the shapes and sizes of different brain structures in an MRI image. Morphometric changes in different brain structures are associated with many neurological and psychiatric disorders, for example Alzheimer's disease. Tracking these changes with automated segmentation methods would aid in diagnosing a particular brain disease and in following its progression. In this thesis we present a method for automatic segmentation of MRI brain scans using parametric generative models and Bayesian inference. Our method segments a given MRI scan into 41 different structures, including for example the hippocampus, thalamus and ventricles. In contrast to the current state-of-the-art methods in whole-brain segmentation, our method places no constraints on the MRI scanning protocol used to acquire the images. Our model consists of two parts: a labeling model that models the anatomy of the brain, and an imaging model that relates the label images to intensity images. Using these models and Bayesian inference, we can find the most probable segmentation of a given MRI scan. We show how to train the labeling model using manual segmentations performed by experts, and how to find optimal imaging-model parameters using an expectation-maximization (EM) optimizer. We compare our automated segmentations against expert segmentations by means of Dice scores and point out places for improvement. We then extend the labeling and imaging models and show, using a database of MRI scans of 30 subjects, that the new models improve the segmentations compared to the original ones. Finally, we compare our method against the current state-of-the-art segmentation methods. The results show that the new models are an improvement over the old ones and compare fairly well with other automated segmentation methods. This is encouraging, because there is still room for improvement in our models. The labeling model was trained using only nine expert segmentations, which is a small number, and the automated segmentations should improve as the number of training samples grows. An advantage of our method is that it is fast and generalizes straightforwardly to MRI images with varying contrast properties.
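The imaging-model fit described above relies on expectation-maximization. The sketch below is not the thesis's 41-structure whole-brain model; it only illustrates the EM principle on a hypothetical one-dimensional, two-class Gaussian intensity mixture:

```python
import math

def em_gmm_1d(x, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture to intensities x by EM."""
    mu = [min(x), max(x)]          # crude initialisation of class means
    var = [1.0, 1.0]
    pi = [0.5, 0.5]                # mixing weights
    for _ in range(n_iter):
        # E-step: responsibility of each class for each intensity value.
        resp = []
        for xi in x:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(xi - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate weights, means and variances from responsibilities.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(x)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            var[k] = sum(r[k] * (xi - mu[k]) ** 2 for r, xi in zip(resp, x)) / nk
            var[k] = max(var[k], 1e-6)  # guard against variance collapse
    return mu, var, pi

# Hypothetical intensities drawn from two tissue classes around 10 and 50.
intensities = [9.0, 10.0, 11.0, 49.0, 50.0, 51.0]
mu, var, pi = em_gmm_1d(intensities)
```

In the thesis's setting the analogous M-step would update per-structure intensity parameters, with the labeling model supplying the prior over classes.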
  • Suoniemi, Sanni (2014)
    Physics is easily perceived as a theoretical subject, even though it is at heart an experimental science. Organising experimental work in the classroom can be challenging, however, especially in the area of modern physics. Open particle physics research data makes experimentation and genuine research in particle physics possible. By applying the pedagogy of inquiry-based learning, information-processing skills, collaboration skills and particle physics can be combined into classroom activities. The thesis was carried out as a design-based research study comprising two design cycles. The design phases included a case study conducted in connection with a Masterclass event and a questionnaire study aimed at upper secondary school physics teachers. The questionnaire sought to determine how upper secondary school physics teachers view the use of open particle physics research data in teaching. Based on the teacher survey, open particle physics research data would be well suited to upper secondary studies. The topic clearly interested the teachers, and they believed it would interest students as well. A large majority (80.3%) would be willing to use open particle physics research data in their teaching. Based on the teachers' experiences, the development of teaching should take into account time constraints, ICT constraints, students' differing levels of skill, knowledge and motivation, the challenges posed by the teacher's own level of knowledge, the need for well-instructed materials, and the focus of teaching on the core content. The study indicates a need for further training and, in particular, for Finnish-language support material. In the teachers' view, open particle physics research data could be used on several different physics courses and on a few advanced mathematics courses. Limited scheduling resources would restrict the time spent on the topic in the Matter and Radiation course to just under two lessons. On school-specific physics courses, teachers would be prepared to spend up to about eight lessons on the topic. With sufficient support and guidance, and with the development of Finnish-language material suitable for classroom use, open particle physics research data could be used more widely on upper secondary mathematics and science courses. The topic could also help increase the number of physics students and even out the gender distribution among them. The study produced visions for using open particle physics research data in teaching. As a result of the development work, a didactic reconstruction of the educational use of open particle physics research data was formulated, using as an example the open research data available from the CMS experiment at CERN. The study also yielded information on upper secondary physics teachers' attitudes towards particle physics, on particle physics teaching in upper secondary schools, and on students' attitudes towards informal particle physics education in connection with a Masterclass event.