
Browsing by Title


  • Garcia Sturba, Sebastian (2023)
    Quantum field theory is often presented without clearly defined mathematical structures, especially in the case of field operators. We discuss axiomatic quantum field theory, where quantum fields and states are defined rigorously using distribution theory, alongside their assumed properties in the form of the Wightman axioms. We present the two key results that come from this construction, namely CPT symmetry and the spin-statistics connection. We then consider the construction of quantum fields in curved spacetime so as to discuss their behaviour in regions of large curvature, such as near black holes. This requires us to redefine fields and states in terms of *-algebras. We then present the GNS reconstruction theorem, which allows us to recover the original definitions of these objects in Minkowski spacetime.
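The GNS reconstruction mentioned at the end can be summarised in the standard textbook form (the notation here is generic, not necessarily the thesis's own presentation): from a state ω on a unital *-algebra one builds a Hilbert space, a representation, and a cyclic vector that reproduce ω.

```latex
% GNS construction for a state \omega on a unital *-algebra \mathcal{A}:
\mathcal{N}_\omega = \{\, a \in \mathcal{A} : \omega(a^* a) = 0 \,\}, \qquad
\langle [a], [b] \rangle := \omega(a^* b) \ \text{ on } \mathcal{A}/\mathcal{N}_\omega,
```
```latex
\mathcal{H}_\omega = \overline{\mathcal{A}/\mathcal{N}_\omega}, \qquad
\pi_\omega(a)[b] = [ab], \qquad \Omega_\omega = [\mathbb{1}], \qquad
\omega(a) = \langle \Omega_\omega,\ \pi_\omega(a)\, \Omega_\omega \rangle .
```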
  • Karjalainen, Topias (2022)
    In recent years, there has been great interest in modelling financial markets using fractional Brownian motion. Studies have noted that ordinary diffusion-based stochastic volatility models cannot reproduce certain stylized facts observed in financial markets, such as the at-the-money (ATM) volatility skew tending to infinity at short maturities. Rough stochastic volatility models, where the spot volatility process is driven by a fractional Brownian motion, can reproduce these effects. Although the use of long-memory processes in finance has been advocated since the 1970s, it has taken until now for fractional Brownian motion to gain widespread attention. This thesis serves as an introduction to the subject. We begin by presenting the mathematical definition of fractional Brownian motion and its basic mathematical properties. Most importantly, we show that fractional Brownian motion is not a semimartingale, which means that the theory of Itô calculus cannot be applied to stochastic integrals with fractional Brownian motion as the integrator. We also present important representations of fractional Brownian motion as a moving-average process of a Brownian motion. In the subsequent chapter, we show that we can define a Wiener integral with respect to fractional Brownian motion as a Wiener integral with respect to Brownian motion with a transformed integrand. We also present divergence-type integrals with respect to fractional Brownian motion and an Itô-type formula for fractional Brownian motion. In the last chapter, we introduce rough volatility. We derive the so-called rough Bergomi model, which can be seen as an extension of the Bergomi stochastic volatility model. We then show that for a general stochastic volatility model, there is an exact analytical expression for the ATM volatility skew, defined as the derivative of the implied volatility with respect to the strike price, evaluated at the money. We then present an expression for the short-time limit of the ATM volatility skew under general assumptions, which shows that in order to reproduce the observed short-time limit of infinity, the volatility must be driven by a fractional process. We conclude the thesis by comparing the rough Bergomi model to the SABR and Heston stochastic volatility models.
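The covariance structure underlying the construction summarised above is standard: cov(B_H(s), B_H(t)) = (s^{2H} + t^{2H} - |t - s|^{2H})/2. As an illustrative sketch (not code from the thesis), a fractional Brownian motion path can be sampled exactly by Cholesky factorisation of that covariance; the value H = 0.1 below is a commonly quoted rough-volatility figure, not a result of the thesis.

```python
import numpy as np

def fbm_cholesky(n, hurst, t_max=1.0, seed=0):
    """Sample fractional Brownian motion at n equispaced times on
    (0, t_max] via Cholesky factorisation of the exact covariance
    cov(B_H(s), B_H(t)) = (s^{2H} + t^{2H} - |t - s|^{2H}) / 2."""
    t = np.linspace(t_max / n, t_max, n)
    s, u = np.meshgrid(t, t)
    h2 = 2.0 * hurst
    cov = 0.5 * (s**h2 + u**h2 - np.abs(s - u)**h2)
    chol = np.linalg.cholesky(cov)           # exact but O(n^3) method
    rng = np.random.default_rng(seed)
    return t, chol @ rng.standard_normal(n)

# Rough-volatility regime: Hurst index well below 1/2.
t, path = fbm_cholesky(300, hurst=0.1)
```

For long paths, circulant-embedding methods replace the O(n^3) Cholesky step; the exact method suffices for illustration.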
  • Richter, Stefan (2013)
    An algorithm for invariant mass reconstruction in a search for light charged Higgs bosons (H±) produced in top quark decays and decaying to a tau lepton and neutrino, H^± -> τ^± ν_τ, is presented. Here, 'light' means lighter than the top quark. The algorithm uses the top quark mass as a kinematic constraint to allow the calculation of the longitudinal momentum of the neutrinos. The invariant mass of the tau-and-neutrino system is then calculated using the missing transverse energy, the calculated longitudinal momentum of the neutrinos, and the measured momentum of the visible decay products of the tau lepton. Methods for resolving ambiguities and recovering unphysical results arising in the invariant mass reconstruction are presented. The invariant mass distribution could be used to extract a possible signal, replacing or complementing the transverse mass distribution that has been used so far in the analysis. In a preliminary data analysis using proton-proton collision data at sqrt(s) = 7 TeV corresponding to an integrated luminosity of 5.1 fb^(-1) recorded by the CMS experiment, it is shown that using the invariant mass distribution obtained with the presented algorithm allows a more stringent upper limit to be set on the signal branching fraction B(t -> H^± b) × B(H^± -> τ^± ν_τ) than does using the transverse mass distribution. An expected upper limit at the 95 % confidence level of around 0.37 % to 2.5 % (transverse mass) and 0.13 % to 1.9 % (invariant mass) is found, depending on the H^± mass. These results suggest that using the invariant mass reconstructed with the new algorithm may improve the signal sensitivity of the search.
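In the standard treatment, a mass constraint of the kind described above reduces to a quadratic equation for the neutrino longitudinal momentum. The sketch below is a generic, hypothetical version of that calculation (the function name, the argument layout, and the clamp-to-zero recovery of unphysical solutions are illustrative assumptions, not the thesis's actual algorithm):

```python
import math

def neutrino_pz(p_vis, met_x, met_y, m_parent, m_vis):
    """Generic mass-constrained solution for a neutrino's longitudinal
    momentum: impose m_parent^2 = (p_vis + p_nu)^2 with a massless
    neutrino whose transverse momentum equals the missing transverse
    energy.  p_vis = (E, px, py, pz) of the visible system; returns
    both roots of the resulting quadratic.  A negative discriminant
    (an unphysical result) is clamped to zero, one common recovery
    strategy."""
    e, px, py, pz = p_vis
    a = (m_parent**2 - m_vis**2) / 2.0 + px * met_x + py * met_y
    pt2 = met_x**2 + met_y**2
    denom = e**2 - pz**2
    disc = a**2 * pz**2 - denom * (e**2 * pt2 - a**2)
    root = math.sqrt(max(disc, 0.0))  # clamp complex solutions
    return ((a * pz + root) / denom, (a * pz - root) / denom)
```

The two roots correspond to the ambiguity the abstract mentions; resolving it requires an additional criterion.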
  • Ersalan, Muzaffer Gür (2019)
    In this thesis, Convolutional Neural Networks (CNNs) and Inverse Mathematics methods are discussed for automated defect detection in materials that are used for radiation detectors. The first part of the thesis is dedicated to a literature review of the methods used. These include a general overview of neural networks, computer vision algorithms, and Inverse Mathematics methods such as wavelet transformations or total variation denoising. The Materials and Methods section examines how these methods can be utilized in this problem setting. The Results and Discussion section presents the outcomes and takeaways from the experiments. The focus of this thesis is on the CNN architecture that fits the task best, how to optimize that chosen architecture, and how selected inputs created by Inverse Mathematics methods influence the neural network and its performance. The results of this research reveal that the initially chosen RetinaNet is well suited for the task and that the Inverse Mathematics methods utilized in this thesis provided useful insights.
  • Ihalainen, Olli (2019)
    The Earth’s Bond albedo is the ratio of the total reflected radiative flux emerging from the Earth’s top of the atmosphere (ToA) to the incident solar radiation. As such, it is a crucial component in modeling the Earth’s climate. This thesis presents a novel method for estimating the Earth’s Bond albedo, utilising the dynamical effects of Earth radiation pressure on satellite orbits, which are directly related to the Bond albedo. Whereas current methods for estimating the outgoing reflected radiation are based on point measurements of the radiance reflected by the Earth taken in the proximity of the planet, the new method presented in this thesis makes use of the fact that the Global Positioning System (GPS) satellites together view the entire ToA surface. The theoretical groundwork is laid for this new method starting from the basic principles of light scattering, satellite dynamics, and Bayesian inference. The feasibility of the method is studied numerically using synthetic data generated from real measurements of GPS satellite orbital elements and imaging data from the Earth Polychromatic Imaging Camera (EPIC) aboard the Deep Space Climate Observatory (DSCOVR) spacecraft. The numerical methods section introduces the methods used for forward modeling the ToA outgoing radiation, the Runge-Kutta method for integrating the satellite orbits, and the virtual-observation Markov-chain Monte Carlo methods used for solving the inverse problem. The section also describes a simple clustering method used for classifying the ToA from EPIC images. The inverse problem was studied with very simple models for the ToA, the satellites, and the satellite dynamics. These initial results were promising, as the inverse problem algorithm was able to accurately estimate the Bond albedo. Further study of the method is required to determine how the inverse problem algorithm performs when more realism is added to the models.
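The orbit-propagation ingredient mentioned above (Runge-Kutta integration of satellite dynamics under an extra perturbing acceleration such as radiation pressure) can be sketched generically; the names and the two-body force model below are illustrative assumptions, not the thesis's code:

```python
import numpy as np

MU = 3.986004418e14  # Earth's gravitational parameter, m^3 s^-2

def deriv(state, accel_extra):
    """Two-body dynamics plus an extra perturbing acceleration
    (e.g. Earth radiation pressure); state = [x, y, z, vx, vy, vz]."""
    r, v = state[:3], state[3:]
    a = -MU * r / np.linalg.norm(r)**3 + accel_extra
    return np.concatenate([v, a])

def rk4_step(state, dt, accel_extra=np.zeros(3)):
    """One classical fourth-order Runge-Kutta step."""
    k1 = deriv(state, accel_extra)
    k2 = deriv(state + 0.5 * dt * k1, accel_extra)
    k3 = deriv(state + 0.5 * dt * k2, accel_extra)
    k4 = deriv(state + dt * k3, accel_extra)
    return state + dt / 6.0 * (k1 + 2*k2 + 2*k3 + k4)
```

With `accel_extra` set to a modelled radiation-pressure acceleration, the difference from the unperturbed orbit carries the albedo signal the method exploits.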
  • Anyatasia, Fayya (2023)
    This study researched the rationale of creative practitioners for using or not using Text-to-Image Generative (TTIG) AI in their creative process, as well as the workflows of creative practitioners who use this technology. An analysis of 331 Reddit posts and their comments and an online questionnaire with 92 participants were performed. The results showed that the rationale of creative practitioners for not using TTIG is varied, including personal reasons, the impact on artists, the ethical issues surrounding it, the AI-generated art itself, and their own creative workflow. On the other hand, the motivation for using TTIG is mostly driven by the playfulness and usefulness of the system, amplified by the benefits felt by users, for example as a source of inspiration, help with idea generation, and the exploration of new creative possibilities. There are mainly four types of workflow incorporating TTIG: using its output as a reference only, as is, as a base, or as parts. We further discuss the implications of these findings, and the author highlights the urgency for policymakers to create regulations safeguarding creatives and their creations. The author also proposes developing such systems collaboratively with creative practitioners and including AI in art education curricula.
  • Gao, Yuan (2016)
    Controlling a complicated mechanical system to perform a certain task, for example making a robot dance, is a traditional problem studied in the area of control theory. However, evidence shows that incorporating machine learning techniques in robotics can free researchers from the tedious engineering work of adjusting environmental parameters. Researchers such as Jan Peters, Sethu Vijayakumar, Stefan Schaal, Andrew Ng and Sebastian Thrun were early explorers in this field. Based on reinforcement learning in Partially Observable Markov Decision Processes (POMDPs), they contributed theory and practical implementations of several benchmarks in this field. Recently, one sub-field of machine learning called deep learning has gained a lot of attention as a method attempting to model high-level abstractions by using model architectures composed of multiple non-linear layers (for example [Krizhevsky2012]). Several deep learning architectures such as the deep belief network [Hinton2006], the deep Boltzmann machine [Salakhutdinov2009], the convolutional neural network [Krizhevsky2012] and the deep de-noising auto-encoder [Vincent2010] have shown their advantages in specific areas. The main contribution of deep learning has been to perception, which deals with problems like sensor fusion [OConnor2013], Natural Language Processing (NLP) [Cho2014] and object recognition [Lenz2013][Hoffman2014]. Although considered briefly by Jürgen Schmidhuber's team [Mayer2006], the other area of robotics, namely control, remains more or less unexplored in the realm of deep learning. The main focus of this thesis is to introduce general learning methods for the robot control problem, with an exploration of deep learning methods. Accordingly, this thesis describes traditional learning methods as well as the emerging deep learning methods, including new findings from the investigation.
  • Sinausia, Daniel (2020)
    Building on the studies of plasmonic photocatalysis that have emerged in the past years, we investigated the catalytic performance of spherical gold and silver nanoparticles to examine how their addition to doped molybdenum oxide (MoO3-x) could enhance its own plasmonic properties. These studies were carried out by synthesizing nanofilms on which the coupling oxidation of p-aminothiophenol (PATP) to p,p´-dimercaptoazobenzene (DMAB) was followed using Surface Enhanced Raman Spectroscopy (SERS). The films were irradiated with 532 and 633 nm light, with satisfactory results observed for the nanoparticles and some conversion on the oxide when using the latter wavelength. Additional characterization of the different catalysts was performed using scanning electron microscopy (SEM) to observe their morphologies, confirming the spherical shape of the nanoparticles and the layered composition of the oxide. X-ray diffraction (XRD) was used to study the α-MoO3 structure of the semiconductor and the effects that the doping process had on it, showing how the distance between layers increased as more hydrides were introduced between them. These compounds were also analyzed with a spectrophotometer to study their performance when irradiated with light, showing a noteworthy increase in absorbance after reduction with the dopant. Finally, we tested the performance of these catalysts in the oxidation of benzyl alcohol when irradiated with 427 and 525 nm lamps, as well as with white xenon ones, though unfortunately no significant conversion was found within the time given.
  • Pöllänen, Topias (2023)
    Bioorthogonal chemistry and click chemistry have gained tremendous attention during the past few years. They do not refer to a single reaction but to a class of reactions that take inspiration from nature. Click reactions are driven by a strong thermodynamic driving force and therefore proceed via well-controlled and consistent reaction pathways. Click reactions afford specific products in high yields with negligible by-products. Bioorthogonal chemistry builds on the boundaries set by click chemistry. Bioorthogonal reactions can occur within a living system without interacting or interfering with the natural biological processes; therefore, both the reactants and the products must be inert and stable under physiological conditions. Bioorthogonal reactions have allowed the real-time study of several biomolecules such as glycans, lipids, nucleic acids, and proteins within living systems without cytotoxicity. In the literature section of the thesis, the most important physical properties of both the isonitrile and the chlorooxime are introduced, and the most important bioorthogonal reactions for both functionalities are highlighted. The bioorthogonal isonitrile-chlorooxime ligation is discussed in more detail, and an example is given of how the ligation can be used to label the cell membrane of living cells with fluorescent moieties. To install a moiety on the cell membrane, the membrane must first be modified with small non-natural chemical functionalities, also called “chemical reporters”, which can be installed with metabolic oligosaccharide engineering (MOE) using monosaccharide analogues. After installation of the “chemical reporters” onto the cell surface, molecules containing the corresponding bioorthogonal counterpart can be attached to the cell membrane. Lastly, sulfur(VI) fluoride exchange (SuFEx) click chemistry is discussed. Traditionally, SuFEx click chemistry has been used in organic chemistry to build inorganic connecting bridges between two carbon centres. More recently, SuFEx chemistry has found utility in the radiosynthesis of fluorine-18 containing [18F]sulfonyl fluorides and [18F]fluorosulfates. Fluorine-18 is one of the most commonly used and important positron emitters utilized in radiopharmaceutical chemistry and positron emission tomography (PET). The experimental section of the thesis presents the synthesis routes of the bioorthogonal reaction partners, a peracetylated isonitrilepropanoylmannosamine (Ac4ManNC) and an aryl fluorosulfate chlorooxime. The research hypothesis of the study was that the isonitrile of Ac4ManNC could be installed onto the cell surface of living Jurkat cells (human T lymphoblasts) with MOE. Afterwards, the fluorine-18-labelled aryl [18F]fluorosulfate chlorooxime could be attached to the isonitrile on the cell surface via the bioorthogonal isonitrile-chlorooxime ligation. This cell surface labelling method could then be used in the future to research and develop cell therapy treatments by utilizing PET imaging.
  • Muff, Jake (2023)
    Quantum Monte Carlo (QMC) is an accurate but computationally expensive technique for simulating the electronic structure of solids; its use for modelling positron states and annihilation in solids is relatively new. These simulations can support positron annihilation spectroscopy and help with defect characterisation and vacancy identification in solids by calculating the positron lifetime with increased accuracy and comparing it to experimental results. One method of reducing the computational cost of simulations whilst maintaining chemical accuracy is to employ pseudopotentials. Pseudopotentials approximate the interactions between the outer valence electrons of an atom and the inner core electrons, which are difficult to model. By replacing the core electrons of an atom with an effective potential, a level of accuracy can be maintained whilst reducing the computational cost. This work extends existing research with a new set of pseudopotentials in which fewer core electrons are replaced by an effective potential, leading to an increase in the number of core electrons in the simulation. With the inclusion of additional core electrons in the simulation, the corrections that would otherwise need to be made to the positron lifetime may not be needed. Silicon is chosen as the element under study, as its high electron count makes it difficult to model accurately in positron simulations. The suitability of these new pseudopotentials for QMC is shown by calculating the cohesive and relaxation energies, with comparisons made to previously used pseudopotentials. The positron lifetime is calculated from QMC simulations and compared against experimental and theoretical values. The simulation method and the challenges due to the inclusion of more core electrons are presented and discussed. The results show that these pseudopotentials are suitable for use in QMC studies, including positron lifetime studies.
With the inclusion of more core electrons into the simulation a positron lifetime was calculated with similar accuracy to previous studies, without the need for corrections, proving the validity of the pseudopotentials for use in positron studies. The validation of these pseudopotentials enables future theoretical studies to better capture the annihilation characteristics in cases where core electrons are important. In achieving these results, it was found that energy minimisation rather than variance minimisation was needed for optimising the wavefunction with these pseudopotentials.
  • Hägg, Veera (2023)
    Nanodiscs are a synthetic model system for studying the behavior of cell membranes. They are used in experimental biological research to understand the structural and functional properties of membrane proteins. Their utility is chiefly due to their water solubility and, compared to other synthetic membrane systems, a relatively native lipid environment for membrane proteins. Though membrane proteins are frequently solubilized and stabilized in a nanodisc environment, the physical conditions that they are exposed to in a nanodisc have not been studied in detail. Additionally, the dynamic behavior of transmembrane proteins in a nanodisc environment has not been characterized with respect to a more typical planar bilayer environment. The results presented in this thesis formulate an answer to these open questions through atomistic molecular dynamics simulations and machine learning methods. Nanodisc and bilayer systems with identical lipid compositions are systematically studied, and, separately, both types of systems with the adenosine receptor A2AR, to understand the differences between the model systems. The membrane environment in the two systems is characterized by two well-understood physical properties: the order parameter and the diffusion of lipids in the membrane. The results not only affirm previous studies of nanodiscs but also provide novel insights into the membrane environment of nanodisc systems. Finally, with the help of machine learning methods, the dynamical behaviour of the protein is shown to be significantly altered in the nanodisc system when compared to a planar bilayer environment. Specifically, it is shown that the activation behavior of A2AR is dependent on the model system used to reconstitute the protein.
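The first of the two membrane descriptors mentioned above, the order parameter, has a standard definition that can be computed directly from C-H bond vectors: S_CH = <(3 cos^2 θ - 1)/2>, with θ measured against the membrane normal. The following is a generic sketch of that calculation, not the thesis's analysis code:

```python
import numpy as np

def order_parameter(ch_vectors, normal=(0.0, 0.0, 1.0)):
    """Deuterium-type order parameter S_CH = <(3 cos^2 theta - 1)/2>,
    where theta is the angle between each C-H bond vector and the
    membrane normal (the z axis for a planar bilayer)."""
    v = np.asarray(ch_vectors, dtype=float)
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    cos_t = (v @ n) / np.linalg.norm(v, axis=1)
    return float(np.mean(1.5 * cos_t**2 - 0.5))
```

S_CH ranges from 1 (bonds aligned with the normal) to -1/2 (bonds in the membrane plane), with values near 0 indicating disorder.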
  • Yadav, Arihant (2024)
    The chemoenzymatic approach has been utilized for several decades to overcome the challenges of conventional synthesis methods and work towards an environmentally benign, greener approach. Recently, the use of recombinant enzymes has surged, expanding the scope for synthesizing complex molecules. The synthesis of non-steroidal anti-inflammatory drugs (NSAIDs) is an eminent field of research in pharmaceutical sciences, aiming to enhance therapeutic efficacy while minimizing adverse side effects. The experimental work outlined in this thesis aimed to establish a chemoenzymatic synthesis route for polmacoxib. The study compared the chemoenzymatic pathway coupled with photooxidation to the conventional route described in the literature, with the goal of identifying the most efficient synthesis method. The integration of chemoenzymatic approaches and photocatalysis represents a promising and sustainable method for synthesizing key intermediates of small-molecule drug compounds. The focus of this thesis work was the successful synthesis of the furanone motif, a key intermediate in the synthesis of polmacoxib, using this approach. As part of the research for this thesis, the reaction conditions for photooxidation were screened and reported, followed by a comparative study between the traditional route and the envisioned route. Notably, the study found that the wavelength of light used significantly impacts the optimization of reaction conditions.
  • Sirbu, Léo (2024)
    Atmospheric aerosols are among the main components of the atmosphere; emitted by natural and anthropogenic sources, they play a significant role in climatic and health effects. With the current state of climate change and its consequences for human health, aerosols are among the central topics in atmospheric chemistry and environmental research. Studying the aerosol size distribution in suburban areas is crucial for understanding the direct impact of natural sources, chemical processes, and human activities on the aerosol distribution, which in turn affects human life and the stability of Earth's ecosystems. In this thesis I investigated the aerosol and ion distributions at two suburban sites in Helsinki, the SMEAR-III station and the Viikki SMEAR-Agri station. The main instrument used in this thesis to measure the size distribution is the Neutral cluster and Air Ion Spectrometer (NAIS), while supporting information from gas monitors and mass spectrometry was used for gas-phase compounds. The features of the aerosol and ion distributions are studied with respect to the local environmental differences between the stations and their connection to potential sources and atmospheric chemical processes. New Particle Formation (NPF) is a process contributing to the concentration of aerosols in the atmosphere, while aerosols can also be emitted or formed from anthropogenic sources such as traffic and industrial emissions. Gaseous vapours such as sulfur dioxide, sulfuric acid, nitrogen oxides, and highly oxygenated organic molecules contribute to the atmospheric chemical reactions leading to aerosol formation. Thus, the connections between NPF events and gas-phase compounds and the aerosol and ion distributions were investigated. The findings of this thesis highlight the environmental features of each station, which lead to slight differences in the aerosol and ion distributions. Insights into the aerosol sources are given through the connections between gaseous vapours, NPF events, traffic, and the aerosol and ion distributions.
  • Siurala, Samppa (2013)
    The purpose of this Master's thesis is to examine the use of ionic liquids (ILs) and their effect in the most common separation methods of analytical chemistry, such as gas chromatography (GC), liquid chromatography (LC), and capillary electrophoresis (CE). The use of ionic liquids in thin-layer chromatography (TLC) is also discussed. Ionic liquids are molten salts, generally defined as liquid electrolytes composed of ions. In ionic liquids, hydrogen bonds are a significant interaction force between anions and cations. Ionic liquids are neutral in overall charge but contain positive and negative partial charges. The versatility of ionic liquids is enhanced by their negligible vapour pressure, their mechanical and electrochemical properties, good thermal stability, tunable viscosities, and their ability to extract many organic compounds as well as metal ions. This thesis examines the use of ionic liquids as modifiers added to the mobile phase (LC, CE) and as phases added to the surface of the capillary or column stationary phase (LC, CE, GC, and the solid matrix in TLC) or pre-bonded to the capillary/column surface as stationary phases. The effect of these IL-modified separation methods on the separation of various analytes is examined through examples from the literature. The literature section examines in more detail the mechanisms by which ionic liquids affect separation in the mobile and stationary phases, as well as possible combined effects in these phases. The interactions observed between analytes and IL-modified separation phases include hydrogen bonds, van der Waals forces, weak and strong ionic attractions, n-π and π-π interactions, and electrostatic and hydrophobic/hydrophilic interactions, as well as, for example, increased retention with increasing alkyl chain length of the ionic liquid. Ionic liquids have been used widely, for example to bind greenhouse gases, as catalysts, as rocket engine fuels, as solvents for biomaterials and as lubricants, and in many fields of technology such as metallurgy, nuclear technology, and electrochemistry, as well as electrolytes in alternative energy applications. Because ionic liquids are generally quite harmless to humans and the environment, their applications are likely to grow in the future, replacing traditional solvents and serving as potential tools in the further development of 'green chemistry', which is essential for the environment and sustainable development. In the experimental section, the effect on separation of ILs (P14444OAc and P14444Cl) added to a phosphate buffer background electrolyte was studied in CE analyses. These ionic liquids were found to improve the separation of the studied compounds (19 in total) in the capillary. A homologous series of benzoates (6 compounds) was clearly separated for all compounds, and the closely migrating o-, p-, and m-xylenes were resolved. The observations of the experimental section support the results found in the literature on the properties of ionic liquids as modifiers of the capillary and possibly of the background electrolyte, as the results of the analyses performed in this work show that ionic liquids clearly improve the separation of the studied analytes by CE.
  • Chellapermal, Robert (2018)
    It has proven challenging to detect and analyse the tens of thousands of precursors for secondary organic aerosol (SOA) species in ambient air. Models have shown that SOA in particular are still underestimated by an order of magnitude or more. Instrumentation in the field of atmospheric-pressure mass spectrometry has now evolved to the point of continuously identifying the molecular species in ambient air at high resolution, including neutral clusters if an ionisation mechanism is implemented. Yet molecular-level information is still missing. The key questions include: What is the bulk density of atmospheric particles? What is the molecular composition of isomeric and isobaric compounds? Most high-resolution mass spectrometers are unable to answer these research questions, as their main output is merely a mass-to-charge (m/z) ratio. By measuring the electrical mobility of these particles as well as their molecular composition, these questions can be answered. This has motivated the atmospheric science community to combine mass spectrometry (MS) instruments with mobility instruments for measuring ambient aerosol and gas phases. An extensively studied and well-defined method, ion mobility spectrometry (IMS), has gained popularity in recent years due to its applications in explosives detection and pharmaceutical compound analysis. By coupling an ion mobility spectrometer (IMS) with an atmospheric-pressure-interface time-of-flight mass spectrometer (APiTOF), we aim to provide additional molecular-level information. With electrospray ionisation (ESI) and a custom X-ray ion source, neutral clusters can also be detected. For the first time, we have successfully run a 2-month measurement campaign with this setup at a rural boreal forest site in Finland and analysed the data. Before the campaign, the effective true length of the drift region was calibrated in the laboratory using chemicals of known mobility (tetra-alkyl-ammonium halides). The resulting effective length (~19 cm) was used to calculate the reduced mobility, K0, for all detected masses. Knowing the electrical mobility and mass, and assuming spherical particles, the density was also computed. All measured compounds in the range m/z = 50 to 500 were calculated to have a lower bulk density than water. An additional tool linking particles with the same mobility was investigated to show the various applications of the new dataset. Cluster-level analysis of new particle formation (NPF) events and non-events could be further researched and utilised to obtain novel information about molecular clusters.
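The reduced-mobility calculation described above follows the standard drift-tube relation K = L^2/(U t_d), normalised to standard pressure and temperature. This is a minimal generic sketch (the function and argument names are assumptions, not the thesis's implementation):

```python
def reduced_mobility(length_m, voltage_v, drift_time_s, p_hpa, t_kelvin):
    """Reduced mobility K0 in cm^2 V^-1 s^-1 for a drift-tube IMS:
    K = L^2 / (U * t_d), then normalised to standard pressure
    (1013.25 hPa) and standard temperature (273.15 K)."""
    k = (length_m * 100.0) ** 2 / (voltage_v * drift_time_s)  # cm^2/(V s)
    return k * (p_hpa / 1013.25) * (273.15 / t_kelvin)
```

With the calibrated effective length of ~19 cm, each detected ion's drift time and the tube voltage yield its K0 directly.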
  • Luttikhuis, Thijs (2022)
    One of the most noticeable effects of solar–terrestrial physics is the aurora, which regularly appears in the polar regions. This polar light is the result of the excitation of atmospheric species by charged particles originating from the solar wind and magnetosphere that enter the Earth’s atmosphere, called precipitating particles. We present the first results on auroral proton precipitation into the ionosphere using a global 3-dimensional simulation of near-Earth space plasma with the Vlasiator hybrid-Vlasov model, driven with a southward interplanetary magnetic field and steady solar wind parameters. The hybrid-Vlasov approach describes ions through their velocity distribution function in phase space (3-dimensional ordinary space and 3-dimensional velocity space), while electrons are represented by a massless charge-neutralizing fluid. Vlasiator is a global model describing the whole region of near-Earth space, including the Earth’s magnetosphere (the whole dayside and part of the magnetotail), the magnetosheath, the foreshock region, and some solar wind. The precipitating proton differential number fluxes for this run are determined from the proton phase-space density contained within the bounce loss cone, which is set at a constant angle of 10 degrees everywhere. To determine the precipitation of particles at ionospheric altitudes (in this case a height of 110 km above the Earth’s surface), we trace magnetic field lines from the ionosphere to the inner boundary of the Vlasiator domain using the Tsyganenko model. With this, we obtain a magnetic local time–geomagnetic latitude map of the differential number flux of precipitating protons in 9 energy bins between 0.5 and 50 keV. From the differential number flux, proton integral energy fluxes and mean energies can be obtained. The integral energy fluxes in the Vlasiator run are then compared to data from the Precipitating Electron/Proton Spectrometer (SSJ) instrument of the Defense Meteorological Satellite Program (DMSP) for several satellite overpasses during events with solar wind conditions similar to those in the Vlasiator run. The SSJ instrument bins proton energies between 0.03 and 30 keV. Typical values of the total integral energy flux are between 5 · 10^6 and 5 · 10^7 keV cm^-2 s^-1 sr^-1 in the cusp and between 1 · 10^6 and 3 · 10^7 keV cm^-2 s^-1 sr^-1 in the evening sector for both Vlasiator and DMSP, although DMSP fluxes can locally be up to an order of magnitude higher. Additionally, global precipitation patterns in Vlasiator are compared to OVATION Prime, an empirical model based on DMSP data that can be used to forecast the precipitation of auroral electrons and protons. Although OVATION Prime shows a much wider cusp region than Vlasiator, both show similar maximum integral energy fluxes of around 1 to 2 · 10^7 keV cm^-2 s^-1 sr^-1 in the cusp region, and between 3 · 10^6 and 5 · 10^7 keV cm^-2 s^-1 sr^-1 in the nightside oval.
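The step from a binned differential number flux to an integral energy flux and a mean energy, as used in the comparison above, is a simple weighted sum over the energy bins. A generic sketch (names and bin layout illustrative):

```python
import numpy as np

def precipitation_moments(energies_kev, diff_flux, bin_widths_kev):
    """From a binned differential number flux J(E) in 1/(cm^2 s sr keV),
    compute the integral number flux in 1/(cm^2 s sr), the integral
    energy flux in keV/(cm^2 s sr), and the mean energy in keV."""
    e = np.asarray(energies_kev, dtype=float)
    j = np.asarray(diff_flux, dtype=float)
    de = np.asarray(bin_widths_kev, dtype=float)
    number_flux = np.sum(j * de)
    energy_flux = np.sum(j * e * de)
    return number_flux, energy_flux, energy_flux / number_flux
```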
  • Kukkola, Antti (2023)
    A stream of charged particles known as the solar wind constantly flows at supersonic speed through our solar system. As the supersonic solar wind encounters Earth's magnetic field, a bow shock forms where the solar wind is compressed, heated and slowed down. Not all ions of the solar wind pass through the shock; a portion is reflected back upstream. What happens to the reflected ions depends on the magnetic field geometry of the shock. When the angle between the upstream magnetic field and the shock normal vector is small, the reflected ions follow the magnetic field lines upstream and form a foreshock region; in this case the shock is called quasi-parallel. In the case of a quasi-perpendicular shock, where the angle is large, the reflected ions gyrate back to the shock, accelerated by the convection electric field. Upon returning to the shock, the ions have more energy and either pass through the shock or are reflected again, repeating the process. Ion reflection is important for accelerating ions in shocks. In this work we study the properties and ion reflection of the quasi-perpendicular bow shock in Vlasiator simulations. Vlasiator is a plasma simulation which models the interaction between the solar wind and the Earth's magnetic field. The code simulates the dynamics of plasma using a hybrid-Vlasov model, where ions are represented as velocity distribution functions (VDF) and electrons as a magnetohydrodynamic fluid. Two Vlasiator runs are used in this work. The ion reflection is studied by analysing VDFs at various points in the quasi-perpendicular shock, and the reflection analysis is performed in multiple reference frames. A virtual spacecraft is placed in the simulation to study shock properties and ion dynamics, such as the shock potential and ion reflection efficiency. These are compared to spacecraft observations and other simulations to test how well Vlasiator models the quasi-perpendicular bow shock. 
We find that the ion reflection follows a model of specular reflection well in all tested frames, especially in the plane perpendicular to the magnetic field. In addition, the study was extended to second specular reflections, which were also observed. We conclude that ions in Vlasiator simulations are nearly specularly reflected. The properties of the quasi-perpendicular bow shock are found to be in quantitative agreement with spacecraft observations, and the ion reflection efficiency matches observations well. Investigations of the shock potential revealed that spacecraft observations may carry large uncertainties compared to the real shock potential.
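The specular-reflection model tested above has a compact form: in the shock frame, the velocity component along the shock normal is reversed while the tangential component is unchanged, v' = v − 2(v·n̂)n̂. A minimal sketch (the normal vector and velocity values are illustrative, not taken from the Vlasiator runs):

```python
import numpy as np

def specular_reflect(v, n_hat):
    """Specular reflection at a shock with unit normal n_hat:
    reverse the normal velocity component, keep the tangential one,
    i.e. v' = v - 2 (v . n) n."""
    n_hat = np.asarray(n_hat, dtype=float)
    n_hat = n_hat / np.linalg.norm(n_hat)   # ensure unit normal
    v = np.asarray(v, dtype=float)
    return v - 2.0 * np.dot(v, n_hat) * n_hat

# Illustrative numbers: a solar wind proton moving along -x meets a
# shock whose normal points along +x.
v_in = [-400.0, 50.0, 0.0]          # km/s
n = [1.0, 0.0, 0.0]
v_out = specular_reflect(v_in, n)   # normal component flips: [400., 50., 0.]
```

Applying the reflection twice recovers the original velocity, which is why a second specular reflection returns the ion to its incident velocity in this idealized picture.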
  • Aluthge, Nishadh (2018)
    The exponential growth of the Internet of Things (IoT) complicates network management in terms of security and device troubleshooting due to the heterogeneity of IoT devices. In the absence of a proper device identification mechanism, network administrators are unable to limit unauthorized access, locate vulnerable or rogue devices, or assess the security policies applicable to these devices. Identifying the devices connected to the network is therefore essential, as it provides important insights that enable the proper application of security measures and improve the efficiency of device troubleshooting. Although active device fingerprinting reveals in-depth information about devices, passive device fingerprinting has gained attention because it does not require the cooperation of the devices, as active fingerprinting does. We propose a passive, feature-based device identification technique that extracts features from a sequence of packets during the initial startup of a device and then uses machine learning for classification. The proposed system improves the average device prediction F1-score to 0.912, a 14% increase compared with the state-of-the-art technique. In addition, we have analyzed the impact of the confidence threshold on device prediction accuracy when a previously unknown device is detected by the classifier. As future work we suggest a feature-based approach to detect anomalies in devices by comparing long-term device behaviors.
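The pipeline described above — startup-packet features, a supervised classifier, and a confidence threshold for flagging unseen devices — can be sketched as follows. Everything here is an assumption for illustration: the feature set (mean packet size, packet count, distinct destination ports, TTL), the device labels, the training values, and the random-forest classifier are placeholders, not the thesis's actual features or model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features from a device's first packets after startup:
# [mean packet size, packet count, distinct destination ports, TTL]
X_train = np.array([
    [120.0,  35,  3,  64],   # "smart_bulb" samples (illustrative)
    [118.0,  40,  3,  64],
    [560.0, 210, 12, 128],   # "ip_camera" samples (illustrative)
    [540.0, 190, 11, 128],
])
y_train = ["smart_bulb", "smart_bulb", "ip_camera", "ip_camera"]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

def identify(features, threshold=0.7):
    """Predict the device type, or return 'unknown' when the classifier's
    top-class probability falls below the confidence threshold
    (the previously-unknown-device case analyzed in the abstract)."""
    probs = clf.predict_proba([features])[0]
    best = int(np.argmax(probs))
    return clf.classes_[best] if probs[best] >= threshold else "unknown"
```

Raising the threshold trades misclassifications of known devices for more "unknown" verdicts, which is exactly the accuracy-vs-coverage trade-off the confidence-threshold analysis examines.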
  • Kontio, Heikki (2019)
    The number of IoT sensors in the world is steadily rising. As they are used more and more in industries and households, the importance of proper lifecycle management (LCM) is increasing. Failure detection and security management are essential to managing the large number of devices. In this thesis a number of platforms are evaluated on the basis of how well they meet the expectations of LCM. The evaluation was done via a gap analysis, with the following categories: tools for estimating the remaining useful lifetime of sensors, API availability for LCM, failure detection, and security management. Based on the gap analysis, a list of recommendations is given in order to fill the gaps:
    - universal, platform-independent tools for estimating the remaining useful lifetime (RUL)
    - updating APIs to REST, a widely used, scalable and extensible architectural style
    - a platform-independent standard for sensors reporting their health status
    - making industry-standard detection methods available to all
  • Li, Chunxiang (2014)
    Multiple Sequence Alignment (MSA) is one of the essential methods in molecular biology. The accuracy of MSAs affects downstream analyses such as phylogenetic inference, protein structure prediction, and function prediction. Because of the importance of MSA, current methods search for the optimal alignment with different objective functions and heuristics, and as a result, different methods perform differently on various tasks. For example, alignments produced by methods designed around structural homology are likely to mislead comparative and phylogenetic analyses, since these downstream analyses require alignments that correctly represent evolutionary homology. The phylogeny-aware alignment method PRANK by Löytynoja and Goldman has been demonstrated to perform well in aligning protein-coding genes for evolutionary and comparative analyses. One of the reasons is that phylogenetic information is taken into account during alignment in order to distinguish insertions from deletions. It has become the method of choice in comparative sequence analyses; for instance, in a recently published tiger genome study, PRANK was used to align the orthologous genes. However, some issues still need to be resolved in PRANK. First, it can be sensitive to errors in the guide phylogenetic tree, which can bias the resulting alignment. Second, its single-threaded design does not allow PRANK to take advantage of modern CPU architectures or computer clusters, which is a disadvantage when working with large volumes of data. In this thesis, iPRANK, an iterative alignment tool built on PRANK, is introduced. The proposed tool is able to utilize multiple cores and make alignment faster via a divide-and-merge approach, which splits the data set into subsets according to a guide tree and then runs PRANK simultaneously on each subset. 
Iterating between alignment and tree inference makes it possible to search for a good tree and eliminate errors in the initial guide tree, improving the resulting alignment. In this way, iPRANK can estimate the tree and the multiple sequence alignment simultaneously for a set of unaligned sequences. In addition to improving the alignment of a single data set, the developed tool is also capable of inferring phylogenies from data sets consisting of multiple genes via a gene concatenation strategy. Extensive studies on a set of simulated data demonstrate that the developed tool can run PRANK on a large computer cluster and produce improved alignments and phylogenetic trees compared to other approaches.
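The divide-and-merge strategy described above can be sketched schematically. This is not iPRANK's actual code: `run_prank` stands in for a wrapper that would invoke the PRANK executable on a subset (e.g. via subprocess), and the guide-tree split is simplified to round-robin chunking; only the parallel map-then-merge shape is the point:

```python
from concurrent.futures import ThreadPoolExecutor

def run_prank(subset):
    """Stand-in for invoking the PRANK executable on a subset of
    (name, sequence) pairs and returning the sub-alignment; stubbed
    out here to return the subset unchanged."""
    return list(subset)

def split_by_guide_tree(sequences, n_subsets):
    """Simplified stand-in for splitting the data set into clades of
    the guide tree: here the sequences are just chunked round-robin."""
    return [sequences[i::n_subsets] for i in range(n_subsets)]

def divide_and_merge(sequences, n_workers=4):
    """Divide-and-merge: align each subset in parallel, then merge.
    In iPRANK the merge would combine sub-alignments along the guide
    tree; here they are simply concatenated."""
    subsets = split_by_guide_tree(sequences, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        sub_alignments = list(pool.map(run_prank, subsets))
    return [pair for sub in sub_alignments for pair in sub]
```

A thread pool suffices here because each worker would mostly wait on an external PRANK process; the speedup comes from running the subset alignments concurrently rather than from Python-level parallelism.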