Browsing by department "Institutionen för fysik"
Now showing items 21-40 of 372
-
(2014) The coronal magnetic field governs most coronal activity. Despite its importance in the solar atmosphere, there is no accurate method for measuring it, and current measurements depend on extrapolation of photospheric magnetic fields. Different models exist for studying the global structure of the coronal magnetic field; the most commonly used are the potential field source surface (PFSS) model and magnetohydrodynamic (MHD) models. In this thesis, we study the coronal magnetic field conditions during the major solar energetic particle (SEP) events of the 23rd solar cycle using the PFSS model. We use 114 SEP events observed by the SOHO/ERNE experiment in 1996-2010. We first identified 43 events that are relatively free from disturbances caused by interplanetary coronal mass ejections (ICMEs). We examined these SEP events using IDL software developed by the Lockheed Martin Solar and Astrophysics Lab (LMSAL) and produced plots of the open coronal magnetic field of each event using SolarSoft. We also classified the SEP events according to their number of connection points: events with a single connection point, two connection points, or multiple connection points. Events with multiple connection points make up almost one third of the total. They show that coronal magnetic connections are typically complicated and that neighbouring magnetic field lines in the solar wind can be magnetically connected to regions that are well separated in the low corona. We also found that the actual connection longitude (a longitude that takes the coronal magnetic field into account) is usually closer to the flare site associated with the event than the Parker spiral connection longitude is. The Parker spiral longitudes, connection longitudes and flare longitudes are analysed in detail with histograms. Finally, we chose two example events and analysed them using intensity-time profiles of particles, plots from the LASCO CME catalog and plots produced by SolarSoft. Based on our analysis we classified the example events into gradual and hybrid SEP events.
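Since the Parker spiral connection longitude serves as the baseline against which the PFSS-derived connection longitudes are compared, here is a minimal sketch of the standard nominal-spiral estimate (the rotation rate and solar wind speed are illustrative constants, not values from the thesis):

```python
import math

# Minimal sketch (not the thesis code): nominal Parker-spiral connection
# longitude offset for an observer at 1 AU.
OMEGA_SUN = 2.86e-6   # solar rotation rate [rad/s]
AU = 1.496e11         # astronomical unit [m]

def parker_offset_deg(u_sw_kms, r=AU):
    """Longitude offset between the observer and the solar footpoint
    of the nominal spiral field line through the observer."""
    return math.degrees(OMEGA_SUN * r / (u_sw_kms * 1e3))

# A 400 km/s wind gives the textbook ~60 degree offset at 1 AU.
print(f"{parker_offset_deg(400.0):.1f} deg")   # ~61.3
```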
-
(2018) This study analyses the different radiation parameters measured at the SMEAR II station in Hyytiälä, Finland. The measurements include global radiation, diffuse shortwave radiation, reflected shortwave radiation, net radiation, photosynthetically active radiation (PAR), diffuse PAR, reflected PAR, ultraviolet-A (UV-A) and ultraviolet-B (UV-B) radiation, incoming and outgoing infrared (IR) radiation, and below-canopy PAR. Annual and inter-annual variations in the different radiation parameters are investigated, alongside dependencies and changes in the relationships between different radiation variables. The changes in the radiation parameters are compared to changes in cloud occurrence at the measurement station, where cloud occurrence is based on cloud-base height measurements from a ceilometer. The monthly median values of the parameters and of the ratios between parameters did not show any statistically significant trends. Annual and seasonal variations were detected for both individual parameters and ratios of parameters; these variations result from changes in the solar zenith angle, climatic conditions, cloudiness, the aerosol load of the atmosphere, and the absorbance/emittance properties of the surface.
-
(2015) Solar energetic particle (SEP) events are sudden, temporary increases in cosmic ray fluxes related to solar flares or interplanetary shocks originating from the Sun. Modelling SEP transport requires a systematic understanding of the properties of the heliosphere. Current models of particle transport in the heliosphere assume that the interplanetary medium has a steady-state solar wind and that the heliospheric magnetic field follows a Parker spiral. The presence of coronal mass ejections (CMEs) or interplanetary coronal mass ejections (ICMEs) in the heliosphere can disturb both the solar wind and the magnetic field. In this project we analyse two heliospheric modelling tools, the ENLIL and ENLIL-with-cone models, to see how accurately they can describe the heliosphere in the presence of coronal mass ejections. To this end we investigated the SEP events of the 23rd solar cycle. We first examined the 114 SEP events recorded in this cycle for their relationships with CMEs and ICMEs, investigating whether each SEP event could be related to an ICME using time-window analysis and the position of the ICME when the SEP event was recorded. Using this process we identified 43 SEP events that are ICME-clean (not related to any ICME according to the two criteria we set). We then modelled the ICME-clean events with ENLIL. We further analysed the ICME-clean events for any relation to CMEs, narrowing our search to SEP events with three or fewer associated CMEs, and produced plots for these events to further study the relation between the SEPs and the CMEs. We singled out the SEP event recorded on May 9, 1999 as a prime candidate for further analysis with the ENLIL-with-cone model. This event was chosen because it is associated with a fast northward CME that expands into the western hemisphere and could possibly have accelerated the SEPs towards Earth. When analysed with the ENLIL-with-cone model, we found that the CME interfered with the magnetic field lines directed towards Earth, providing a likely origin for the observed SEP event at 1 AU. Though the contact between the CME and the Earthward field lines was very brief, it disrupted the Parker spiral structure of the field lines. From the statistical analysis of the ICMEs and CMEs during the large SEP events of the 23rd solar cycle, we deduced that the two assumptions used in modelling heliospheric SEP transport (a steady-state solar wind and a Parker spiral magnetic field) cannot be made in typical cases; more advanced descriptions of the heliospheric field, such as ENLIL-with-cone, could be used for modelling instead. We conclude that future heliospheric modelling tools need to encompass more factors than the two assumptions discussed above.
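As an illustration of the kind of time-window screening described above, a hedged sketch (the two actual criteria and the window length used in the thesis are not reproduced here; this logic and the dates are assumptions):

```python
from datetime import datetime, timedelta

# Hedged sketch (assumed logic, not the thesis pipeline): flag an SEP
# event as "ICME-clean" when no ICME interval overlaps a window around
# the event onset.
def is_icme_clean(sep_onset, icme_intervals, window=timedelta(days=2)):
    """icme_intervals: list of (start, end) datetimes at the observer."""
    lo, hi = sep_onset - window, sep_onset + window
    return all(end < lo or start > hi for start, end in icme_intervals)

icmes = [(datetime(1999, 5, 1), datetime(1999, 5, 3))]   # made-up interval
print(is_icme_clean(datetime(1999, 5, 9, 18), icmes))    # True: no overlap
```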
-
(2017) This work examines the operation and behaviour of a dose-area product (DAP) meter at low radiation doses, where the DAP values are small. Paediatric examinations use low exposure settings, so the doses received by child patients are low; in paediatric thorax examinations the mean DAP is 19 mGy·cm². The accuracy of the DAP value at low doses is important, because radiation exposure in childhood carries a higher risk than a corresponding exposure in adulthood. Children therefore have a special status in radiation protection, and particular attention must be paid to the justification and optimisation of paediatric examinations. The accuracy of the DAP meters at low doses was examined using a beam-area calibration method, in which the DAP meters served as field instruments and a Raysafe Xi dosemeter as the reference instrument; in other words, the values from the DAP meter were compared with those from the Raysafe Xi. The DAP meter is mounted in front of the X-ray tube, and in the beam-area method the reference dosemeter is placed below the tube in the radiation beam, so both instruments are exposed simultaneously during an exposure. The results show that the DAP meters had been calibrated at high tube voltage and tube current-time product without added filtration, so low doses were not taken into account in the calibration. Examining the accuracy of the DAP meters at low doses revealed that the equipment requirement for DAP meters, which allows the displayed value to deviate from the true value by at most 25%, is not met by the AGFA DX-D600 and FUJI FDR Acselerate X-ray units when the DAP value is in the range 0-4 mGy·cm². The DAP values from the meters of these two units are therefore not reliable below 4 mGy·cm².
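A minimal sketch of the 25% display-accuracy check described above (the readings are made-up numbers, not measurements from the study):

```python
# Illustrative sketch (not the thesis analysis): test whether a DAP
# meter reading stays within the +/-25 % tolerance of a reference
# dose-area product value.
def within_tolerance(dap_meter, dap_reference, tol=0.25):
    deviation = (dap_meter - dap_reference) / dap_reference
    return abs(deviation) <= tol, deviation

ok, dev = within_tolerance(2.6, 3.8)   # mGy*cm^2, hypothetical readings
print(ok, f"{dev:+.1%}")               # False, -31.6%
```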
-
(2018) A new theoretical model for the structure of glasses is presented and used to study the boson peak found in glasses. The model is based on a simple lattice model familiar from crystals, which is disordered using techniques from noncommutative fluid models. First, classical crystal models and concepts of lattice vibrations are reviewed, focusing on acoustic and optical waves, the density of vibrational states, heat capacity and the Debye model. Then noncommutative fluid theory and noncommutative geometry are briefly introduced to show the connection to fluids in our model. After these introductions, the glass model is formulated and used to calculate the dispersion relations, the density of vibrational states and the heat capacity. The density of states has a Van Hove singularity at low frequencies, which generates the boson peak seen in experiments. The glass is found to have both acoustic and optical waves, and the acoustic waves are located very close to the frequency of the Van Hove singularity, which hints that the boson peak should be related to acoustic waves.
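To illustrate what a Van Hove singularity in a vibrational density of states looks like, here is a textbook sketch for the simplest case, a one-dimensional monatomic chain (generic background, not the thesis' disordered model):

```python
import numpy as np

# Textbook illustration (not the thesis model): the 1D-chain dispersion
# omega(k) = omega_max * sin(k a / 2) gives the density of states
# g(omega) = (2/pi) / sqrt(omega_max**2 - omega**2), which diverges at
# the band edge: a Van Hove singularity of the kind discussed above.
omega_max = 1.0
omega = np.linspace(0.0, 0.999 * omega_max, 5)
g = (2.0 / np.pi) / np.sqrt(omega_max**2 - omega**2)
for w, gw in zip(omega, g):
    print(f"omega = {w:.3f}   g = {gw:.2f}")   # g blows up as omega -> omega_max
```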
-
(2018) We review the basics of quantum field theories (QFT) and lattice field theories (LFT). We present, evaluate and compare possible solutions for creating portable, high-performance LFT simulation programs. We choose one of these solutions, creating our own programming language, and discuss its features and our prototype implementation. Finally, we present ideas for improving the implemented solution.
-
(2017) Hydrographic monitoring in the Baltic Sea has traditionally been carried out with research vessels, which are expensive to operate and can visit the monitoring stations to take soundings only a few times a year. Since 2011, the Finnish Meteorological Institute has tested and operated automatic profiling Argo floats in the Bothnian Sea. The floats offer a new way to measure hydrography and deep currents, but the shallowness and low salinity of the Baltic Sea pose challenges for operating them. This thesis analyses the profile measurements from the first five operational years (2012-2016) of the Finnish Meteorological Institute's Argo floats. The main research questions are what the float data tell us about the Bothnian Sea as an independent data set, where the floats perform better than traditional monitoring and where they do not, and how the floats should be used as part of the existing observation network. The mean temperature and salinity values calculated from the Argo float data for the Bothnian Sea were close to those reported in the literature, although the bottom-layer salinity was about 0.5 g kg⁻¹ lower. This was thought to be partly because not all profiles reached the bottom, and partly because of the short time series compared with the climatological means of the literature. The year 2014 was found to be exceptional in both surface temperature and bottom-layer salinity; the high salinity was probably a sign of a larger-than-usual volume of water entering the Bothnian Sea from the Baltic proper. Argo floats produce profiles of the hydrography of the water column much more frequently than traditional monitoring cruises, so compared with ship-based monitoring their strength lies in tracking short-term changes. Currents can also be estimated from the drift speed of the floats, and the current speeds calculated for the deep of the Bothnian Sea were close to those in the literature (1.4-4.8 cm s⁻¹). Current speeds are, however, more a by-product than the primary purpose of the hydrographic measurements: the speed estimates have many error sources which, combined with the shallow sea area, complicate the interpretation of the results. Overall, the Argo floats were found to work in the Bothnian Sea despite the challenges of the area. The floats produce profile data at a frequency previously unattainable because of the cost of ship monitoring; during their first operational years, the Bothnian Sea Argo floats produced up to 80% of the profile data from the deep area (compared with measurements at the HELCOM monitoring stations). In the future it would be interesting to experiment with the steering of the floats, for example in the Åland Sea, and to study how the three floats currently (10/2017) measuring simultaneously in the Bothnian Sea describe the whole basin and whether this would be a suitable number of floats for simultaneous operational monitoring.
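Since the drift-based current estimates come from successive surface positions of a float, a minimal sketch of such an estimate (the positions, cycle length and great-circle approximation are illustrative assumptions, not the institute's actual processing):

```python
import math

# Hedged sketch (assumed approach): mean drift speed from two
# consecutive surfacing fixes via the haversine great-circle distance.
R_EARTH = 6_371_000.0  # m

def haversine(lat1, lon1, lat2, lon2):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R_EARTH * math.asin(math.sqrt(a))

dist = haversine(61.20, 19.50, 61.28, 19.70)   # made-up fixes [m]
dt = 7 * 24 * 3600                             # assumed one-week cycle [s]
print(f"mean drift {100 * dist / dt:.1f} cm/s")  # ~2.3, within 1.4-4.8
```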
-
(2015) In spring, as solar radiation increases, the melting season begins on ice-covered lakes: first the snow and then the ice cover melt. Snow-free ice transmits some of the visible light of solar radiation, which warms the water under the ice and thereby produces an unstable density stratification and the onset of convective mixing. Because the ice season of freezing lakes has shortened over the past century, which can affect, for example, the climate of Arctic land areas and lake ecology, research on ice break-up and the physical processes of the melting season is needed so that the interaction between water, ice and atmosphere can be better understood and modelled. Kilpisjärvi (69°03'N, 20°50'E) is an Arctic tundra lake in the Käsivarsi region of Finnish Lapland. Its ice winter usually lasts from November to June, and the lake ice reaches a thickness of about 90 cm each year. In spring 2013, a field campaign was carried out at Kilpisjärvi to study the physical processes of Arctic lakes during the melting season; the measurements of ice properties and water temperature collected during the campaign are analysed in this thesis. Ice melt was monitored, the amount of PAR radiation penetrating the ice was measured, and the crystal structure of a sample of the lake ice was examined. Thermistor chains installed in the lake recorded the evolution of water temperature at different depths, and CTD soundings tracked the spatial variation of temperature in different parts of the lake as well as the attenuation of radiation in the water body. Weather data were also obtained from the Kilpisjärvi biological station of the University of Helsinki and from the Finnish Meteorological Institute's weather station at the Kilpisjärvi village centre, and the heat balance of the ice was calculated from them. During the field period, 25 May - 4 June, conditions at Kilpisjärvi were exceptional: at its highest, the air temperature rose to +25 °C. The lake ice melted rapidly, 4-5 cm per day, which led to an early ice-off on 3 June. Although the heat balance was dominated by solar radiation, the sensible heat flux and net longwave radiation also warmed the ice. The transmittance of the ice, which consisted entirely of snow ice, was high, 0.6-0.9, and the light attenuation coefficient ranged between 0.2 m⁻¹ and 0.8 m⁻¹. Under the ice, the irradiance level averaged 155 W m⁻², and this heating caused a rapid rise of the water temperature under the ice even before ice-off. Spatial variation was nevertheless clear, as the littoral areas warmed faster than the pelagic areas. The study showed that close to ice-off, solar radiation is the most significant factor warming the water, and the convection it drives mixes the water efficiently. The amount of radiation transmitted through the ice and the warming of the water probably vary between different parts of the lake, however, because of the inhomogeneous ice cover; this can give rise to density-driven convective cells and thus a complex circulation structure under the ice. No observations of this were obtained with the methods used, so in later, similar studies the measurements should be designed to cover a larger area of the lake.
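A hedged sketch of the under-ice light calculation implied above, using Beer-Lambert attenuation with the transmittance and attenuation-coefficient ranges reported in the abstract (the incident irradiance is an assumption, chosen so the under-ice value lands near the reported ~155 W m⁻² average):

```python
import math

# Minimal sketch (standard Beer-Lambert attenuation, not the thesis
# code): irradiance just below the ice and at depth z,
# E(z) = T * E0 * exp(-kappa * z).
E0 = 220.0      # incident shortwave at the ice surface [W/m^2], assumed
T = 0.7         # ice transmittance, within the observed 0.6-0.9
kappa = 0.5     # attenuation coefficient [1/m], within 0.2-0.8

for z in (0.0, 1.0, 5.0):
    E = T * E0 * math.exp(-kappa * z)
    print(f"z = {z:3.1f} m   E = {E:6.1f} W/m^2")   # 154.0, 93.4, 12.6
```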
-
(2017) Vacuum arc electrical breakdowns cause problems in many devices operating under high electric fields, such as the Compact Linear Collider (CLIC), a proposed next-generation particle accelerator at CERN. The breakdown phenomenon is not well understood despite decades of research. Diffusive mass transport in metallic surfaces under electric fields is hypothesised to play a role in the events leading to breakdowns. Kinetic Monte Carlo (KMC) is a well-established simulation method for studying diffusion. The weakness of KMC is that it requires knowledge of the rates of all processes that can happen during the simulation: in the case of diffusion, these are migration events of mobile objects. The rates can be found from migration barriers, which in turn can be calculated using various methods. In this thesis, the parametrisation scheme of an existing atomistic KMC model for studying Cu surface diffusion was improved. In this model, the migration barrier is a function of the local environment of the migrating atom. The barriers in different environments were calculated with the nudged elastic band (NEB) method. It is an accurate way of finding barriers, but too expensive to be used for calculating all of them in the improved parametrisation scheme. This problem was treated with a multidisciplinary approach: training an artificial neural network (ANN) to predict the barriers, using a limited dataset calculated with the NEB method. Good prediction performance was achieved for stable migration processes on smooth surfaces, and the predictor function was found to be fast enough to be called during KMC runtime.
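For orientation, a generic sketch of a rejection-free KMC step in which each rate comes from an Arrhenius expression over a migration barrier, the quantity the ANN predicts (the attempt frequency, temperature and barrier values are illustrative assumptions, not the thesis parametrisation):

```python
import math, random

# Generic rejection-free KMC step (not the thesis code): rates follow
# r = nu * exp(-E_barrier / kT); in the thesis each barrier would come
# from the trained neural-network predictor.
K_B = 8.617e-5   # Boltzmann constant [eV/K]
NU = 1e13        # attempt frequency [1/s], assumed
T = 300.0        # temperature [K]

def kmc_step(barriers_eV):
    rates = [NU * math.exp(-E / (K_B * T)) for E in barriers_eV]
    total = sum(rates)
    u, acc = random.random() * total, 0.0
    for i, r in enumerate(rates):            # pick event i with prob r/total
        acc += r
        if u <= acc:
            break
    dt = -math.log(random.random()) / total  # stochastic time advance
    return i, dt

event, dt = kmc_step([0.45, 0.52, 0.60])     # mock ANN-predicted barriers
print(event, dt)
```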
-
(2015) The Standard Model (SM) fails, both conceptually and phenomenologically, to incorporate and explain several fundamental problems of particle physics and cosmology, such as a viable dark matter candidate, a mechanism for inflation, neutrino masses and the hierarchy problem. In addition, the recent discovery of the 125 GeV Higgs boson, together with the measured top quark mass, favours metastability of the electroweak vacuum, implying that the Higgs boson is trapped in a false vacuum. In this thesis we propose the simplest extension of the SM: adding an extra degree of freedom, a scalar singlet. The singlet can mix with the Higgs field via the Higgs portal, and as a result we obtain two scalar mass eigenstates (Higgs-like and singlet-like). We identify the lighter mass eigenstate with the 125 GeV SM Higgs boson. Due to the mixing, the SM Higgs quartic coupling receives a finite tree-level correction which can make the electroweak vacuum completely stable. We then study the stability bounds on the tree-level parameters and determine the allowed mass region of the heavier (singlet-like) mass eigenstate for the range of mixing angles where all the bounds are satisfied. We also obtain regions of parameter space for different signs of the Higgs portal coupling. In the allowed region, the singlet-like state can decay into two Higgs-like states, and we find the corresponding decay rate to be substantial. Finally, we review various applications of the singlet extension, most notably to the problems of dark matter and inflation.
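In the standard notation for such portal models, the mixing described above takes the form below (a generic sketch; the field names and angle are conventional, not parameter values from the thesis):

```latex
% Generic Higgs--singlet mixing: h is the neutral SM-like field, s the
% singlet, and theta the mixing angle. The lighter eigenstate h_1 is
% identified with the 125 GeV boson.
\begin{aligned}
h_1 &= h\cos\theta + s\sin\theta, \\
h_2 &= -\,h\sin\theta + s\cos\theta .
\end{aligned}
```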
-
(2016) This thesis discusses various topics related to the study of strongly coupled quantum field theories at finite density or, equivalently, finite chemical potential. In particular, the focus is on the theory of strong interactions, quantum chromodynamics (QCD). Finite-density QCD is important in the description of numerous physical systems, such as neutron stars and heavy-ion collisions, a brief overview of which is given, together with the QCD phase diagram, as motivational examples. After this, the general construction of a Lagrangian finite-density quantum field theory is described. In contrast with the zero-density setting, a finite-density field theory does not admit a simple description on the lattice, rendering this standard approach to strongly coupled theories impractical due to the so-called sign problem. Various attempts to address the sign problem are reviewed, and the so-called Lefschetz thimble approach and the complex Langevin method are discussed in detail; some mathematical details related to these approaches are elaborated in the appendices. Due to the impracticality of lattice methods, a perturbative description becomes more important at finite density. Perturbative finite-density QCD and methods useful in practical calculations are discussed, amongst them a detailed proof of a set of so-called 'cutting rules' that apply to zero-temperature finite-density quantum field theory, an example computation using these rules, and a discussion of various divergences and their relation to the zero-density theory.
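To make the complex Langevin idea concrete, a standard one-variable toy example (a generic textbook illustration, not material from the thesis; for a Gaussian action the method is known to converge to the correct result):

```python
import random

# Toy complex Langevin (illustrative): a one-dimensional "theory" with
# complex action S(z) = (sigma/2) z^2. The variable is complexified and
# drifts by -S'(z) dt plus real Gaussian noise; for this Gaussian case
# <z^2> should converge to 1/sigma.
sigma = 1.0 + 1.0j
dt, n_steps, n_therm = 1e-4, 1_000_000, 10_000

z, acc = 0.0 + 0.0j, 0.0 + 0.0j
for step in range(n_steps):
    drift = -sigma * z                                    # -dS/dz
    z += drift * dt + (2 * dt) ** 0.5 * random.gauss(0.0, 1.0)
    if step >= n_therm:
        acc += z * z
print(acc / (n_steps - n_therm), 1 / sigma)               # both near 0.5-0.5j
```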
-
(2016) This thesis is a case study of the impact of urban planning on local air quality along a planned city boulevard in western Helsinki. The aim is to analyse the ventilation and dispersion of traffic-related air pollutants inside street canyons and courtyards in four alternative city block design versions; in particular, whether the form and variation of building height can improve air quality in future planned neighbourhoods and thereby support the decision-making process in city planning. The study employs the large-eddy simulation (LES) model PALM, with embedded Lagrangian stochastic particle and canopy models, to simulate the transport of pollutants (air parcels) and the aerodynamic impact of street trees and a surrounding forest on pollutant transport. The embedded models are revised by the author to take into account the horizontal heterogeneity of the particle sources and plant canopy. Furthermore, three-dimensional two-way self-nesting is used in PALM for the first time in this study. High-resolution simulations are conducted over a real urban topography under two contrasting meteorological conditions, with neutral and stable stratification and south-western and eastern wind directions, respectively. The comparison of the different boulevard-design versions is based on analysing the temporal mean particle concentrations, the turbulent vertical particle flux densities and the particle dilution rate. Differences in flux densities between the versions show a strong dependence on urban morphology, whereas the advection-related dilution rate depends on the volume of unblocked streamwise street canyons. A suggestive ranking of the versions is performed based on the horizontal mean values of the analysis measures (separately for the boulevard, the other street canyons, the courtyards and the surroundings). Considering both meteorological conditions, the design version with variable building height and short canyons along the boulevard outperforms the other versions in this ranking, especially in stable conditions. Surprisingly, variability in building shape did not bring clear improvements in ventilation. This is the first high-resolution LES study conducted over a real urban topography that applies such sophisticated measures to assess pollutant dispersion and ventilation inside street canyons and courtyards.
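One of the comparison measures above, the turbulent vertical particle flux density, is an eddy covariance; a minimal sketch of that decomposition on a synthetic series (illustrative only, not PALM post-processing):

```python
import numpy as np

# Standard eddy-covariance decomposition (not PALM output processing):
# the turbulent vertical flux is <w'c'> = <(w - <w>)(c - <c>)>, with w
# the vertical velocity and c the particle concentration.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.5, 10_000)                   # vertical velocity [m/s]
c = 50.0 - 20.0 * w + rng.normal(0, 5, w.size)     # synthetic, anticorrelated

wc_flux = np.mean((w - w.mean()) * (c - c.mean()))
print(f"<w'c'> = {wc_flux:.1f}")   # ~ -5: net downward turbulent transport here
```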
-
(2016) Physical and mechanical properties of drill core specimens were determined as part of investigations into excavation damage in the dedicated study area in ONK-TKU-3620. The main goal of this study was to find indicators of excavation damage in the form of anomalous physical properties linked to increased porosity or lower mechanical strength; geophysical indicators are desired for their ease, speed and cost-effectiveness. The secondary goal was to find associations between dynamic and static elastic properties, to allow the estimation of rock mechanical properties using geophysical measurements. The parameters most sensitive to the presence of (saline) pore fluid showed depth dependencies. Resistivity showed abnormally low values in the first 0.2 m, and an increase with depth in the first 0.7 m from the study area surface. S-velocity, shear impedance, shear modulus and Young's modulus all showed abnormally low values in the first 0.2 m from the study area surface. In addition to these clear depth dependencies, other indicators of excavation damage were found. Specimens in the first 0.7 m from the study area surface showed an increased proportion of high (> 0.5%) porosity values. Combinations of high porosity/shallow depth, low resistivity/shallow depth, high porosity/low resistivity, low IP/shallow depth and low IP/high porosity also seem to separate anomalous specimens, as do S-velocity, the P/S ratio, Poisson's ratio and all three impedances with respect to depth. Abnormally high S-velocity with respect to the other elastic properties also seemed to separate anomalous specimens. On one of the anomalous specimens, the presence of an EDZ feature was confirmed by a Posiva geologist; this specimen could be identified based on S-velocity, the P/S ratio, Poisson's ratio and all three impedances. The best indicators of excavation damage based on this study appear to be resistivity, S-velocity, shear impedance, shear modulus and Young's modulus, and most of the other elastic parameters, in conjunction with other parameters, could be used to identify anomalous specimens. The results support the use of electrical and seismic methods to identify excavation damage. Estimation of static elastic properties from dynamic elastic properties does not appear possible based on this study. The views and opinions presented here are those of the author, and do not necessarily reflect the views of Posiva.
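The dynamic elastic properties referred to above follow from density and seismic velocities via the standard elastodynamic relations; a minimal sketch (the sample values are illustrative, not measured data from the study):

```python
# Standard elastodynamics relations (not the thesis measurement code):
# dynamic moduli and impedances from density and P/S-wave velocities.
rho = 2650.0    # density [kg/m^3], illustrative
vp = 5800.0     # P-wave velocity [m/s], illustrative
vs = 3400.0     # S-wave velocity [m/s], illustrative

G = rho * vs**2                                   # shear modulus [Pa]
nu = (vp**2 - 2 * vs**2) / (2 * (vp**2 - vs**2))  # Poisson's ratio
E = 2 * G * (1 + nu)                              # Young's modulus [Pa]
Zp, Zs = rho * vp, rho * vs                       # P and S impedances

print(f"G = {G/1e9:.1f} GPa, E = {E/1e9:.1f} GPa, nu = {nu:.2f}")
```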
-
(2016) Estimates for asteroid masses are based on gravitational perturbations on the orbits of other objects such as Mars, spacecraft, or other asteroids and/or their satellites. In the case of asteroid-asteroid perturbations, this leads to a 13-dimensional inverse problem where the aim is to derive the mass of the perturbing asteroid and six orbital elements for both the perturbing asteroid and the test asteroid using astrometric observations. We have developed and implemented three different mass estimation algorithms utilizing asteroid-asteroid perturbations into the OpenOrb asteroid-orbit-computation software: the very rough 'marching' approximation, in which the asteroid orbits are fixed at a given epoch, reducing the problem to a one-dimensional estimation of the mass; an implementation of the Nelder-Mead simplex method; and, most significantly, a Markov-chain Monte Carlo (MCMC) approach. We introduce each of these algorithms with particular focus on the MCMC algorithm, and present example results for both synthetic and real data. Our results agree with the published mass estimates, but suggest that the published uncertainties may be misleading as a consequence of using linearized mass-estimation methods. Finally, we discuss remaining challenges with the algorithms as well as future plans.
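For orientation, a generic random-walk Metropolis sketch of the mass-sampling idea (not OpenOrb's implementation: the likelihood below is a mock stand-in, and the real chain also carries the 2 x 6 orbital elements that make the problem 13-dimensional):

```python
import math, random

# Generic Metropolis sampler (illustrative). In the real problem the
# log-likelihood would compare simulated astrometry with observations.
def log_likelihood(mass):
    # Mock Gaussian posterior in solar masses (made-up numbers).
    return -0.5 * ((mass - 4.7e-10) / 5e-11) ** 2

def metropolis(m0, step, n):
    m, logp, chain = m0, log_likelihood(m0), []
    for _ in range(n):
        m_new = m + random.gauss(0.0, step)
        logp_new = log_likelihood(m_new)
        if math.log(random.random()) < logp_new - logp:   # accept/reject
            m, logp = m_new, logp_new
        chain.append(m)
    return chain

chain = metropolis(5e-10, 3e-11, 50_000)
print(sum(chain[10_000:]) / 40_000)   # posterior-mean mass after burn-in
```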
-
(2018) The purpose of this study is to investigate what factors affect a student's likelihood of successfully reaching his or her goal of becoming a scientist or, more specifically, a physicist. Academic achievement has long been associated with intelligence, but this restricted view is not comprehensive enough and has lacked the study of how noncognitive personality traits relate to success. In this study, factors relating to skills and talent in physics, as well as to personality, are analysed, together with their relationship to a physics attitude assessment test. Three questionnaires were used to gather data: the Grit Survey, the Colorado Learning Attitudes about Science Survey (CLASS) and the Force Concept Inventory (FCI). Grit measures a person's perseverance of effort and consistency of interest; it is a personality trait thought to predict success better than previous personality constructs have. CLASS measures students' expert-like thinking in a physics context and is used here as a predictor of academic achievement. FCI measures a student's conceptual understanding of force in Newtonian physics and is used here as an evaluator of a student's skills and talent in physics. The sample consisted of 71 students attending a first-year physics course at the University of Helsinki, 43 male and 28 female; most had decided to major in physics. Correlation analysis found no significant relationship between grit and CLASS, with a correlation coefficient of only r = 0.181 (p = 0.131), whereas FCI and CLASS showed a correlation of r = 0.312. Correlations between the individual factors of grit and CLASS were also analysed. Grit as a whole thus did not correlate with CLASS, a predictor of academic achievement, while FCI, an evaluator of skill and talent in physics, did. However, the effort dimension of grit did correlate with CLASS (r = 0.241, p = 0.043). The two dimensions of grit therefore seem to measure different things, and it is important to be able to analyse them separately.
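A minimal sketch of the correlation analysis reported above (a standard Pearson r with scipy; the score arrays are synthetic, not the study's data):

```python
import numpy as np
from scipy import stats

# Standard Pearson correlation between two per-student score arrays,
# as done above for grit vs. CLASS and FCI vs. CLASS. Synthetic data.
rng = np.random.default_rng(1)
grit = rng.normal(3.5, 0.5, 71)                    # n = 71 students
class_score = 0.1 * grit + rng.normal(0.6, 0.15, 71)

r, p = stats.pearsonr(grit, class_score)
print(f"r = {r:.3f}, p = {p:.3f}")
```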
-
(2012) Graphene is the ultimately thin membrane, composed of carbon atoms, whose prospective applications range from desalinating sea water to fast electronics. In studying the properties of this material, molecular dynamics (MD) has proven to be a reliable way to simulate the effects of ion irradiation of graphene. As ion beam irradiation can be used to introduce defects into a membrane, it can also be used to add substitutional impurities and adatoms into the structure. In the first study introduced in this thesis, I present results on doping graphene with boron and nitrogen. The most important message of this study is that doping graphene with an ion beam is possible and can be applied not only to bulk targets but also to a sheet of carbon atoms only one atomic layer thick. Another important result is that different defect types have characteristic energy ranges that differ from each other; because of this, the defect types created during irradiation can be controlled by varying the ion energy. The optimum energy for creating a substitution is about 50 eV for both ions, with probabilities of 55% for N and ca. 40% for B. Single vacancies are most probably created at an energy of about 125 eV for N (55%) and ca. 180 eV for B (35%); for double vacancies, the maximum probabilities are roughly at 110 eV for N (16%) and 70 eV for B (6%). The probabilities for adatoms are highest at very small energies. A one-atom-thick graphene membrane is reportedly impermeable to standard gases, so graphene's selectivity for gas molecules trying to pass through the membrane is determined only by the size of the defects and vacancies in it. Gas separation using graphene membranes requires knowledge of the properties of defected graphene structures. In this thesis, I present results on the accumulation of irradiation damage in graphene using MD simulations. According to our results, graphene can withstand vacancy concentrations of up to 35% without breaking. A simple model was also introduced to predict the influence of the irradiation during experiments. In addition to the specific results regarding ion irradiation manipulation of graphene, this work shows that MD is a valuable tool for materials research, providing information on the atomic scale that is rarely accessible to experimental research, e.g., during irradiation. Using realistic interatomic potentials, MD provides a computational microscope that helps us understand how materials behave at the atomic level.
-
(2012) Magnetic resonance imaging (MRI) provides spatially accurate, three-dimensional structural images of the human brain in a non-invasive way. This allows us to study the structure and function of the brain by analysing the shapes and sizes of different brain structures in an MRI image. Morphometric changes in different brain structures are associated with many neurological and psychiatric disorders, for example Alzheimer's disease. Tracking these changes with automated segmentation methods would aid in diagnosing a particular brain disease and in following its progression. In this thesis we present a method for automatic segmentation of MRI brain scans using parametric generative models and Bayesian inference. Our method segments a given MRI scan into 41 different structures, including, for example, the hippocampus, thalamus and ventricles. In contrast to the current state-of-the-art methods in whole-brain segmentation, our method does not pose any constraints on the MRI scanning protocol used to acquire the images. Our model consists of two parts: a labeling model that models the anatomy of the brain, and an imaging model that relates the label images to intensity images. Using these models and Bayesian inference, we can find the most probable segmentation of a given MRI scan. We show how to train the labeling model using manual segmentations performed by experts, and how to find optimal imaging model parameters using an expectation-maximization (EM) optimizer. We compare our automated segmentations against expert segmentations by means of Dice scores and point out places for improvement. We then extend the labeling and imaging models and show, using a database of MRI scans of 30 subjects, that the new models improve the segmentations compared to the original models. Finally, we compare our method against the current state-of-the-art segmentation methods. The results show that the new models are an improvement over the old ones and compare fairly well against other automated segmentation methods. This is encouraging, because there is still room for improvement: the labeling model was trained using only nine expert segmentations, which is quite a small amount, and the automated segmentations should improve as the number of training samples grows. The upside of our method is that it is fast and generalizes straightforwardly to MRI images with varying contrast properties.
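The Dice score used above for comparing automatic and manual segmentations is the standard overlap measure; a minimal sketch (toy masks, not the thesis data):

```python
import numpy as np

# Standard Dice overlap, Dice = 2|A ∩ B| / (|A| + |B|), between an
# automatic and a manual label mask for one structure.
def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto   = np.array([[0, 1, 1], [0, 1, 0]])   # toy automatic mask
manual = np.array([[0, 1, 0], [1, 1, 0]])   # toy manual mask
print(dice(auto, manual))                   # 2*2 / (3+3) = 0.667
```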
-
(2014) Physics is easily perceived as a theoretical subject, even though it is at heart an experimental science. Organising experimental work in the classroom can be challenging, however, especially in the topic area of modern physics. Open particle physics research data make experimentation and genuine inquiry in particle physics possible. By drawing on the pedagogy of inquiry-based learning, information-handling skills, collaboration skills and particle physics can be combined into activities carried out in the classroom. The thesis was conducted as a design-based research study consisting of two development cycles. The development phases included a case study carried out in connection with a Masterclass event and a survey aimed at upper secondary school physics teachers. The survey sought to determine the teachers' attitudes towards the educational use of open particle physics research data. Based on the results of the teacher survey, open particle physics research data would suit upper secondary studies well. The topic clearly interested the teachers, and it was believed to interest students as well; the majority (80.3%) would be willing to use open particle physics research data in their teaching. Based on the teachers' experiences, the development of such teaching should take into account time constraints, ICT constraints, the students' varying levels of skill, knowledge and motivation, the challenges posed by the teacher's own level of knowledge, good instructions for the material, and a focus on the core content. The study indicates a need for further training and, especially, for Finnish-language support material. In the teachers' view, open particle physics research data could be used on several different physics courses and on a few advanced mathematics courses. Limited time resources would restrict the time spent on the topic on the Matter and Radiation course to just under two lessons, whereas on school-specific physics courses the teachers would be prepared to spend up to about eight lessons on it. With sufficient support and guidance, and with the development of Finnish-language material suitable for classroom use, open particle physics research data could be used more widely on mathematics and science courses in upper secondary schools. The topic could also help increase the number of physics students and even out the gender distribution among them. The research produced visions for the educational use of open particle physics research data. As a result of the development work, a didactic reconstruction of the educational use of open particle physics research data was formulated, using as an example the open research data available from the CMS experiment at the particle physics research centre CERN. The study also yielded information on upper secondary physics teachers' attitudes towards particle physics, on particle physics teaching in upper secondary schools, and on students' attitudes towards informal particle physics education in connection with a Masterclass event.
-
(2013) Concept maps were originally developed in the 1980s by Joseph Novak and Bob Gowin as a way of structuring new knowledge and thereby gaining a deeper understanding of it. Since then they have been developed and studied as a learning, teaching and assessment method, not least by researchers such as Maria Ruiz-Primo. This thesis largely builds on Ruiz-Primo's development of the concept map as a tool for assessing student performance, and investigates whether traditional tests could be replaced by concept-map tasks in physics. Concept maps have a demonstrably positive effect on learning, but they are rarely used in our schools, and it is difficult to get students to adopt a new and unintuitive learning method. Using the concept map as an assessment method in class would nudge students towards more concept-based thinking and could help them in their understanding of the subject. As a test task, a concept map is quick to make and to grade, yet gives the teacher good insight into a student's understanding of a topic. If concept maps could be used as test tasks, at least diagnostically, it would bring a whole host of advantages. Two classes were studied: a seventh-grade class in comprehensive school and a course class in upper secondary school. Both groups were taught the basics of concept mapping beforehand; the upper secondary students also had some prior experience of it. During a course, both groups then took a traditional test and made a concept map of the material they had recently covered. The concept maps were assessed with a five-step model developed by Ruiz-Primo and Shavelson and given a grade according to how well they compared with an expert map. The results were examined separately for the two groups and analysed for difficulty, correlation and agreement using a statistical method developed by Bland and Altman in 1986. The difficulty of the task was found to be appropriate: both groups' mean grades were somewhat lower for the concept map than for the traditional test, which can be explained by the students having, after all, more experience of traditional tests than of concept maps. The correlation for the comprehensive school group was found to be good, 0.825, while that for the upper secondary school was considerably weaker, 0.412. The Bland-Altman method further gave negative results for the upper secondary school, with very large swings between individual students' performance in the two tasks. The comprehensive school group performed somewhat more consistently, but showed a trend in which the very weak and the very strong students benefited from the concept-map task, while students with grade averages around 7 did worse than in the traditional test. The correlation for the comprehensive school is strong enough that the concept map could conceivably be used as an assessment method there. Because of the limited mathematics in comprehensive school, a large part of the natural sciences there is also phenomenon- and concept-based rather than based on problem solving. In upper secondary school it is the other way around: most of the physics syllabus consists of mathematical calculation of phenomena, not of explaining them in words and understanding relationships. As a consequence, concept maps are more useful as an alternative assessment method in comprehensive school than in upper secondary school.
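A minimal sketch of the Bland and Altman (1986) agreement analysis used above, computing the bias and 95% limits of agreement between two scoring methods (the grades are synthetic, not the thesis data):

```python
import numpy as np

# Bland-Altman agreement analysis: mean difference (bias) and 95 %
# limits of agreement between two grading methods on the same students.
rng = np.random.default_rng(2)
test_grade = rng.uniform(5, 10, 25)                     # traditional test
map_grade = test_grade - 0.4 + rng.normal(0, 0.8, 25)   # concept map

diff = map_grade - test_grade
bias, sd = diff.mean(), diff.std(ddof=1)
print(f"bias = {bias:+.2f}, limits of agreement = "
      f"[{bias - 1.96 * sd:+.2f}, {bias + 1.96 * sd:+.2f}]")
```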
-
(2012) Astrobiology is an interdisciplinary research field which studies the origin of life. One of the great challenges of modern observational astronomy in this area is to find the building blocks of life in interstellar molecular clouds. These so-called biomolecules are the subject of this thesis. First I present some fundamentals of radio spectroscopy and molecular structure. The emphasis is then on observations made with the SEST and APEX telescopes of the objects NGC 6334F and IRAS 16293-2422. These objects are so-called hot cores: concentrations of gas and dust inside a molecular cloud where a new star is being born. One of the objectives in studying these hot sources is to find glycine, NH2CH2COOH. Glycine is the simplest amino acid, and thus one of the building blocks of proteins. It has been hypothesised that several reaction chains may lead to glycine, either in the gas phase or the solid phase. Molecules that are part of the reaction chains leading to glycine are called precursors; possible precursors of glycine are, for example, acetic acid and formic acid. The objective of this thesis is to find some of these precursors or their isomers. I have endeavoured to identify all molecular lines in the observed rotation spectra. This is done with the GILDAS/CLASS software package, specifically with its new Weeds extension, and requires estimating the column density and then adjusting its value until the Weeds model fits the observations. Many organic molecules were found. Glycine was not found, and of its precursors only formic acid and methyl formate were detected. The most interesting result, however, was the detection of aminoethanol (NH2CH2CH2OH), a precursor of the amino acid alanine (CH3NH2CHCOOH). Although this is only a tentative detection, it justifies a thorough investigation in future observations. Many of the observed spectral lines are blended, so better resolution is needed, and many lines are weak and lost in the noise. The new ALMA interferometer will prove to be an invaluable tool in searching for new biomolecules: ALMA has very high angular and spectral resolution, high sensitivity and a large bandwidth, properties that are needed if we are to confirm or refute this new detection of aminoethanol.
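For context, the standard optically thin LTE relation behind this kind of column-density fitting (textbook form, as used in rotational-line analysis generally; not a formula quoted from the thesis):

```latex
% Optically thin LTE column density from an integrated line intensity:
% N_u is the upper-state column density of a transition at frequency nu
% with Einstein coefficient A_ul; the total column density N_tot follows
% via the partition function Q(T_ex) and the upper-state degeneracy g_u.
N_u = \frac{8\pi k \nu^2}{h c^3 A_{ul}} \int T_B \, dv ,
\qquad
N_{\mathrm{tot}} = N_u \, \frac{Q(T_{\mathrm{ex}})}{g_u}
\exp\!\left(\frac{E_u}{k T_{\mathrm{ex}}}\right)
```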