
Browsing by Title


  • Hansson, Kristian (2019)
    The purpose of edge computing is to move data processing closer to the data source, because the computing capacity of centralized servers will not suffice to analyse all data simultaneously in the future. The Internet of Things is one use case of edge computing. Edge computing systems are fairly complex and increasingly call for agile DevOps practices, and suitable technologies are needed to implement those practices. The first research question was: What kinds of technical solutions have been applied to delivering edge computing applications? It was answered by reviewing industry solutions, that is, those of cloud service providers. The technical solutions revealed that edge computing applications are delivered either as containers or as packaged directories, and that communication between the edge and the server relies on lightweight messaging protocols or a VPN connection. The literature review identified container clusters as a possible management tool for edge computing. From the results of the first research question a second one was derived: Can Docker Swarm be used to operate edge computing applications? This question was answered with an empirical case study. A centralized delivery process for edge computing applications was built using the Docker Swarm container cluster software, cloud servers, and Raspberry Pi single-board computers. In addition to delivery, the study considered runtime monitoring of the software, rollback to the previous software version, grouping of cluster devices, attaching physical peripherals, and support for different processor architectures. The results showed that Docker Swarm can be used as such for managing edge computing software: it is suitable for delivery, monitoring, rolling back to a previous version, and grouping devices, and it can create clusters that run the same software on processors of different architectures. However, Docker Swarm proved unsuitable for controlling peripherals attached to an edge device. The large number of edge computing solutions offered by industry indicates broad interest in the practical application of containers. Based on this study, container clusters in particular proved to be a promising technology for managing edge computing applications. Obtaining further evidence calls for broader empirical follow-up studies using a similar setup.
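    The deployment setup described in the abstract above (a Docker Swarm spanning cloud servers and Raspberry Pi nodes, with workloads targeted at groups of devices) can be sketched with the Docker SDK for Python. This is a minimal illustration under assumed names, not code from the thesis: the image, node label, and manager address are hypothetical.

    ```python
    # Minimal sketch: deploy a service onto labelled edge nodes in a Docker Swarm
    # using the Docker SDK for Python (docker-py). Assumes this host's daemon is
    # (or becomes) a swarm manager; image/label/address values are hypothetical.
    import docker

    client = docker.from_env()

    try:
        client.swarm.init(advertise_addr="10.0.0.1")  # hypothetical manager address
    except docker.errors.APIError:
        pass  # the daemon is already part of a swarm

    # Run a multi-architecture image only on nodes labelled as edge devices;
    # each node pulls the image variant matching its processor architecture.
    service = client.services.create(
        image="example/edge-sensor-app:latest",    # hypothetical multi-arch image
        name="edge-sensor-app",
        constraints=["node.labels.role == edge"],  # group devices via node labels
        mode=docker.types.ServiceMode("replicated", replicas=3),
    )
    print(service.name, service.id)
    ```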
  • Harhio, Säde (2022)
    The importance of software architecture design decisions has been known for almost 20 years. Knowledge vaporisation is a problem in many projects, especially in the current fast-paced culture, where developers often switch from one project to another. Documenting software architecture design decisions helps developers understand the software better and make informed decisions in the future. However, documenting architecture design decisions is highly undervalued: it does not create any revenue in itself, and it is often the disliked and therefore neglected part of the job. This literature review explores what methods, tools and practices are being suggested in the scientific literature, as well as what practitioners recommend within the grey literature. What makes these methods good or bad is also investigated. The review covers the past five years and 36 analysed papers. The evidence gathered shows that most of the scientific literature concentrates on developing tools to aid the documentation process. Twelve out of nineteen grey literature papers concentrate on Architecture Decision Records (ADRs). ADRs are small template files which, as a collection, describe the architecture of the entire system. ADRs appear to be what practitioners have become used to over the past decade, as they were first introduced in 2011. What is seen as beneficial in a method or tool is low cost and low effort combined with concise, good-quality content; what is seen as a drawback is high cost, high effort, and producing too much or badly organised content. The suitability of a method or tool depends on the project itself and its requirements.
  • Linnoinen, Krista (2013)
    Mathematics teaching has been an active field of research and development at the Department of Mathematics and Systems Analysis at Aalto University. This research has been motivated by a desire to increase the number of students that pass compulsory basic mathematics courses without compromising on standards. The courses aim to provide the engineering students with the mathematical skills needed in their degree programmes, so it is essential that a proper foundation is laid. Since 2006, a web-based automated assessment system called STACK has been used on basic mathematics courses at Aalto University for supplementary exercises to aid learning. In this thesis, computer-aided mathematics teaching and, in particular, automated assessment are studied to investigate what effect attempting to solve online exercises has on mathematical proficiency. This is done by using a Granger causality test. For this, the first two of three basic courses are examined. The concepts relating to learning and computer-aided mathematics teaching, as well as the developments made at Aalto University, including Mumie, are first presented. Then, the statistical methodology, the theoretical framework and the test procedure for Granger causality are described. The courses and data, which were collected from STACK and used to quantify mathematical proficiency for the Granger causality test, are then reviewed. Finally, the results and implications are presented. The Granger causality tests show that there exists a Granger-causal relationship such that mathematical proficiency affects the desire to attempt to solve exercises. This holds for both of the interpretations used for quantifying mathematical proficiency and all variations of the penalty deducted for incorrect attempts. The results imply that the exercises are too difficult for the students and that students tend to give up quickly. Thus, the Granger causality tests produced statistically significant results to back up what teachers have always known: students are discouraged by failure, but encouraged by success. The results provide teachers with valuable information about the students' abilities and enable teachers to alter the teaching accordingly to better support the students' learning.
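    For readers unfamiliar with the method, a Granger causality test of the kind used in the thesis above can be run with statsmodels on two aligned time series. The sketch below uses synthetic data and hypothetical variable roles ("attempts" and "proficiency"); it is not the thesis's dataset or exact test specification.

    ```python
    # Sketch of a Granger causality test: does the second column ("proficiency")
    # help predict the first column ("attempts")? Synthetic weekly series.
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(0)
    n = 100
    proficiency = rng.normal(size=n).cumsum()        # synthetic proficiency signal

    # "attempts" is driven by last week's proficiency plus noise.
    attempts = np.empty(n)
    attempts[0] = rng.normal()
    attempts[1:] = 0.5 * proficiency[:-1] + rng.normal(size=n - 1)

    # Column order matters: the test asks whether column 2 Granger-causes column 1.
    data = np.column_stack([attempts, proficiency])
    results = grangercausalitytests(data, maxlag=2)
    # results is a dict keyed by lag, containing F- and chi-squared test statistics.
    ```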
  • Forsman, Pauliina (2023)
    The green transition is necessary for mitigating climate change. However, it is not a problem-free development pathway from the points of view of global justice and social sustainability, as the manufacturing of green technologies requires large amounts of minerals from developing countries. Competition for mineral resources is creating growing pressure to increase mining activities, which in many countries involves environmental and human rights issues. This is feared to cause environmental destruction and inhumane working and living conditions for the people in the mining areas, creating new global inequalities. To avoid this trajectory, demands for a just green transition, in which the benefits and harms of energy systems would be more evenly distributed globally, have been presented. The political pressure to implement the green transition is great. Therefore, many actors worldwide have committed to various carbon neutrality goals, and cities play a key role in this. By a majority decision of its city councilors, the city of Helsinki has also set an ambitious goal of being carbon neutral by 2030, which requires a fast implementation of the green transition. In this master's thesis, the discussion minutes of the Helsinki city council from the years 2019–2022 were studied with an interpretative approach using discourse analysis as a method. The purpose was to find out how the green transition is discussed in the council and which factors influence the perceptions of the green transition presented there. In addition, the purpose was to examine whether the council discussions propose any solutions to the challenges of global injustice connected to the green transition, or whether those problems were recognized at all. As a result, three discourses were interpreted from the data: an unproblematic discourse, a critical discourse, and a must-do discourse. The unproblematic discourse viewed the green transition in a positive and/or neutral light, emphasizing the possibilities in climate change mitigation; economic perspectives were also strongly present in this context. The critical discourse covered economic and social grievances related to the green transition, which were considered to relate to security of supply, the economy, and ecological and social sustainability. In the third, must-do discourse, the meaning of green technology was formed through the mitigation of climate change, which was seen as threatening all life on Earth; in this view, global warming itself was seen as the greatest social and justice issue. Discourses and perceptions of Helsinki's green transition are strongly influenced by the city's way of focusing its emission calculations only on reducing the city's direct CO2 emissions. Thus, the global social effects caused by Helsinki's green transition cannot be verified with the city's current evaluation methods. Consequently, the councilors discuss the green transition from a strongly local perspective.
  • Toivanen, Aleksi (2020)
    As browsers have matured as a programming environment, the Document Object Model (DOM) has become the established rendering target. The DOM makes efficient document manipulation possible, but using it requires deep familiarity, and not all of its programming interfaces are free of problems: the DOM is easy to use inefficiently and some of its parts are error-prone. These challenges have been addressed by offering more manageable, higher-level programming environments and interfaces. The central role of DOM-related challenges in browser-based application development can be seen in the large number of frameworks and libraries that render to the DOM, and the solutions and approaches to these challenges vary greatly. This thesis examines how rendering frameworks, libraries, and languages approach the problems of rendering to the DOM, and to what extent their solutions are tied specifically to the DOM as a rendering target. The thesis covers seven analysed frameworks, libraries, and languages that render to the DOM, selected on the basis of the different approaches and techniques they use. They aim to abstract the developer's access to the real DOM, thereby preventing the most common error situations and hiding their rendering work. The thesis describes how the analysed frameworks and libraries operate and the key design decisions they make. The analysis of individual frameworks cannot be directly generalized to frameworks outside the analysis, but it does give an indication of prevailing broader trends, and common traits can be found across the analysed frameworks. The analysis shows the abundance of solutions for optimized rendering to the DOM: the same end result can be reached in many different ways, with the techniques offering differing strengths.
  • Stefańska, Marta (2023)
    In this work, molecular mass determination by diffusion-ordered nuclear magnetic resonance spectroscopy was carried out for a series of poly(2-oxazoline)s, polypeptoids and poly(2-oxazine)s. The samples included linear, star-like and cyclized homopolymers and block copolymers. The data were calibrated against polyethylene glycol, polystyrene and poly(methyl methacrylate) standards. The results were compared with those obtained by matrix-assisted laser desorption/ionization mass spectrometry, size exclusion chromatography, rolling-ball viscometry and end-group analysis based on proton nuclear magnetic resonance. It was concluded that, in general, diffusion-ordered spectroscopy tends to give a very accurate estimate of masses up to 30 kg/mol in deuterated water and dimethyl sulfoxide, especially after viscosity correction. In addition, nuclear magnetic resonance spectroscopy provides a wealth of information about the samples, including their structure and possible impurities. In summary, this methodology can be applied to different polymers and is invaluable in the absence of standards with solubility similar to that of the analyzed polymers, since the viscosity correction enables a comparison of results measured in different solvents.
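    DOSY-based calibrations of this kind typically rest on the standard power-law relation between the diffusion coefficient and molar mass shown below; this is the general textbook form, not the specific coefficients reported in the thesis.

    ```latex
    % General form of a DOSY calibration: the diffusion coefficient D scales
    % with molar mass M as a power law, so fitting standards in log-log space
    % gives a straight line from which unknown masses are read off.
    \begin{align}
      D &= K\,M^{-\alpha}, \\
      \log D &= \log K - \alpha \log M
      \quad\Longrightarrow\quad
      M = \left(\frac{K}{D}\right)^{1/\alpha}.
    \end{align}
    ```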
  • Rautsola, Iiro (2019)
    Multimodality imaging is an efficient, non-invasive method for investigating molecular and cellular processes in vivo. However, the potential of multimodality imaging in plant studies is yet to be fully realized, largely due to the lack of research into suitable molecular tracers and instrumentation. Iodine has PET- and SPECT-compatible radioisotopes that have significant advantages over other radioisotopes applied in plant radioisotope imaging, and it can be incorporated into small molecules via a variety of reactions. In this master's thesis, a radioiodination method exploiting a novel, Dowex® H+-mediated addition of iodine to terminal alkynes was optimized and tested on two D-glucose analogues. The goal of the sugar analogue radioiodination was to develop a radioiodinated molecular tracer for plant carbohydrate metabolism studies. The parameters under optimization were activation of Dowex® with HCl, reaction temperature, carrier amount, solvent, and evaporation of excess water. The best results were achieved under the following conditions: HCl-activated Dowex®, a reaction temperature of 95 °C, 3.0 µmol of carrier, cyclohexanol as solvent, and excess water evaporated. The Dowex® approach was compared to electrophilic reactions with Chloramine T and Iodogen, and it was concluded that the Dowex® approach leads to superior radiochemical yields under the optimized conditions. The Dowex® method was successfully tested on the sugar analogues, resulting in a single main product at a satisfactory 50–56 % radiochemical yield. The main products were successfully characterized with NMR, and the method was also indicated to be regioselective. It is plausible that the developed method may be improved further in terms of radiochemical yield and molar activity, and that it could prove to be a useful tool for developing novel radioiodinated molecular tracers for plant studies.
  • Hunnakko, Joel (2023)
    An optical data bus is a promising solution for fast data transmission from room temperature to quantum devices at low temperatures, as it would minimize the heat load into the cryogenic system compared to conventional electrical cabling. Previously, a similar measurement setup was used to drive a Josephson junction array (JJA) with an optical pulse pattern generated with a mode-locked laser (MLL). A photodiode (PD) was used to convert the optical signals into photocurrent signals to drive the JJA at low temperature. There was a long and non-ideal transmission line between the PD and the JJA at their operating temperature of 4 K, and experiments and simulations revealed that the non-idealities in the transmission line caused undesired reflections. In this work the PD is integrated on the same chip as the JJA. The new on-chip integration provides a shorter transmission line with fewer interfaces, and the compact transmission line promises fewer electrical signal reflections between the PD and the JJA. A custom-made MLL emitted the original single optical pulse into the optical pulse pair generator. The MLL was operated at a well-defined pulse frequency of 2.3 GHz to produce the single pulses at the desired rate. An optical time delay circuit (OTD) was applied to the pulses generated by the MLL to divide them in time with the desired delays. The generated optical pulse pair pattern was transmitted from room temperature to the PD at 4 K via a polarization-maintaining optical fiber. The fiber was integrated on top of the PD at 4 K, which was used to drive the JJA sample. The PD was biased with a reverse voltage and the JJA sample with a current. The amplification of the optical twin pulses was varied during the measurements. We measured the DC voltage of the JJA sample and the DC photocurrent of the PD simultaneously, and the measurement was repeated with several manually defined time delays between the twin pulses. The work also included optimization of the optical setup, which involved positioning the reflective diffraction grating used to filter out undesired wavelengths. The optical pulse pair method used in this work can be used to investigate the maximum speed of the data signals.
  • Sirkiä, Mika (2020)
    Transparent displays are common, and electroluminescent displays are one example of a transparent display. This thesis focuses on the driving electronics for transparent electroluminescent segmented displays. The designed electronics are described and their performance is measured. A new driving method is presented; it increases the luminance of the display and decreases its power consumption.
  • Mäkinen, Simo (2012)
    Test-driven development is a software development method where programmers compose program code by first implementing a set of small-scale tests which help in the design of the system and in the verification of associated code sections. The reversed design and implementation process is unique: traditionally there is no attempt to verify program code that does not yet exist. Applying test-driven practices to a software development process, a generally complex activity involving distinct individuals working in an organization, might have an impact not only on the process itself but on the outcome of the process as well. In order to assess whether test-driven development has perceivable effects on elements of software development, a qualitative literature survey, based on empirical studies and experiments in industry and academia, was performed. The aggregated results, extracted from studies and experiments on eleven different internal and external process, product and resource quality attributes, indicate that there are positive, neutral and negative effects. Empirical evidence from industry implies that test-driven development has a positive, reducing effect on the number of defects detected in a program. There is also a chance that the code products are smaller, simpler and less complex than equivalent code products implemented without test-driven practices. While additional research is needed, it would seem that test-driven code is easier for the developers to maintain later, too; on average, maintenance duties took less time and the developers felt more comfortable with the code. The effects on the product attributes of coupling and cohesion, which describe the relationships between program code components, are neutral. Increased quality occasionally results in better impressions of the product when the test-driven products conform better to the end-user tests, but there are times when end-users cannot discern the differences in quality between products made with different development methods. The small, unit-level tests written by the developers increase the overall size of code products, since most of the program code statements are covered by the tests if a test-driven process is followed. Writing tests takes time, and the negative effects are associated with the effort required in the process. Industrial case studies see negative implications for productivity due to the extra effort, but student experiments have not always been able to replicate similar results under controlled conditions.
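    To make the test-first workflow described above concrete, here is a minimal, generic illustration (not taken from any of the surveyed studies): the small-scale unit tests are written before the function they verify, and the implementation is then filled in until they pass. The function and test names are illustrative.

    ```python
    # Minimal test-driven development illustration: the tests below are written
    # first and fail until the function is implemented.
    import unittest


    def median(values):
        """Return the median of a non-empty list of numbers (written after the tests)."""
        ordered = sorted(values)
        n = len(ordered)
        mid = n // 2
        if n % 2 == 1:
            return ordered[mid]
        return (ordered[mid - 1] + ordered[mid]) / 2


    class TestMedian(unittest.TestCase):
        # These small tests drive both the design and the verification of the code.
        def test_odd_length(self):
            self.assertEqual(median([3, 1, 2]), 2)

        def test_even_length(self):
            self.assertEqual(median([4, 1, 3, 2]), 2.5)


    if __name__ == "__main__":
        unittest.main()
    ```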
  • Perola, Eero (2023)
    Driving speeds, regardless of vehicle type, are a part of almost everyone's daily life. The subject has been widely studied and many algorithms for determining optimal routes exist. A novel data source for this type of research is GPS-collected Floating Car Data. As positioning-enabled devices have become increasingly abundant, the collection of huge amounts of data with locations, speeds and directions has become vastly more common. In this master's thesis, I examine a Big Data type set of car speeds within the Helsinki area from three different viewpoints. First, I examine the driving patterns described by the distribution of data on different kinds of roads and time periods. Second, I focus on one variable, intersection density, determine the effect it has on the change in speed, and assess whether it is possible to conduct statistical analysis on the data. Last, I analyze the steps needed to fully utilize the variables of the data within the road network system. The results indicate that while there are clear differences in changing speed between road classes, the differences are not described as clearly by road class as by speed limit. Also, the time of day has a clear effect, with times of congestion being distinguishable. While the mean driven speed is below the speed limit on all road classes, on larger roads the mode is above the speed limit. I prove that it is possible to find numerous variables that depict speed change through novel Floating Car Data. Focusing on intersection density, the result is that, at most, intersection density accounts for around eight per cent of the change in speed relative to the speed limit within the Helsinki area. As a final result, a viable method for using linear Floating Car Data to research intersection density and its effects is developed. As an intermediate step and a side result, a workflow for splitting road network layers into segments between intersections is produced.
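    The kind of relationship examined in the thesis above, intersection density explaining part of the deviation of driven speed from the speed limit, can be sketched as an ordinary least-squares fit over road segments. The snippet below uses synthetic data and hypothetical variable names; it is not the thesis's actual workflow or dataset.

    ```python
    # Sketch: regress speed deviation (driven speed minus speed limit) on
    # intersection density per road segment. Synthetic data, hypothetical names.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n_segments = 500
    intersections_per_km = rng.uniform(0, 20, size=n_segments)
    # Deviation from the speed limit, made to drop slightly with density.
    speed_diff = -0.3 * intersections_per_km + rng.normal(scale=5.0, size=n_segments)

    X = sm.add_constant(intersections_per_km)     # intercept + density
    model = sm.OLS(speed_diff, X).fit()
    print(model.params)       # fitted intercept and slope
    print(model.rsquared)     # share of speed-change variance tied to density
    ```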
  • Ibadov, Rustam (2023)
    Poly(2-oxazoline)/poly(2-oxazine)-based block copolymers have gained significant attention in recent years for their potential use in drug delivery systems. The architecture of amphiphilic poly(2-oxazoline)/poly(2-oxazine)-based block copolymers, consisting of hydrophilic outer blocks and a hydrophobic inner block, allows the formation of micelles. The hydrophobic drug is encapsulated within the core, and the hydrophilic shell provides stability and solubility in aqueous solution. The size and properties of the micelles can be tuned by adjusting the composition of the copolymer, making them a versatile platform for drug delivery. In this work, three different poly(2-oxazoline)/poly(2-oxazine)-based copolymers, a triblock, a diblock and a gradient copolymer, were synthesized via cationic ring-opening polymerization and compared in terms of their drug formulation capability. Triblock copolymers, consisting of three polymer blocks, can be tailored to have different hydrophobic and hydrophilic block ratios, allowing for tunable drug release profiles. However, triblock copolymers are more difficult to synthesize, especially if one aims to produce a symmetrical ratio of hydrophilic blocks. Diblock copolymers, consisting of two polymer blocks, can also self-assemble into micelles in aqueous solutions and encapsulate hydrophobic drugs; however, the lower stability of their formulations compared to those of triblock copolymers can limit their drug loading capacity and drug release profiles. In theory, entropy-wise, the diblock copolymer should be favorable when forming a micelle, as it does not need to fold, unlike a triblock copolymer; nevertheless, drug formulations based on triblock copolymers have been shown to be more stable than those based on diblock copolymers. Thus, a more detailed analysis is needed, given the lack of literature systematically comparing these different architectures. Gradient copolymers, consisting of two or more types of monomers incorporated into a polymer chain with a gradually changing composition, have more variable properties and are easier to synthesize, in a single step, than block copolymers. This makes their use in drug formulation very attractive. However, depending on the reactivity of the monomers added, the resulting product can be very different; thus, the kinetics of the copolymerization also deserves study.
  • Kuosmanen, Teemu (2020)
    Cancer is a dynamic and complex microevolutionary process. All attempts at curing cancer therefore also rely on successfully controlling the evolving future cancer cell population. Since the emergence of drug resistance severely limits the success of many anti-cancer therapies, especially in the case of the promising targeted therapies, we urgently need better ways of controlling cancer evolution with our treatments in order to avoid resistance. This thesis characterizes acquired drug resistance as an evolutionary rescue and uses optimal control theory to critically investigate the rationale of aggressive maximum tolerated dose (MTD) therapies that represent the standard of care for first-line treatment. Unlike previous models of drug resistance, which mainly concentrate on minimizing tumor volume, herein the optimal control problem is reformulated to explicitly minimize the probability of evolutionary rescue or, equivalently, to maximize the extinction probability of the cancer cells. Furthermore, I investigate the effects of drug-induced resistance, where the rate at which new resistant cells arise increases with the dose due to an increased genome-wide mutation rate and non-genetic adaptations (such as epigenetic regulation and phenotypic plasticity). This approach not only reflects biological realism, but also allows the cost of control to be modelled in a quantifiable manner instead of using some ambiguous and incomparable penalty parameter for the cost of treatment. The major finding presented in this thesis is that MTD-style therapies may actually increase the likelihood of an evolutionary rescue even when only modest drug-induced effects are present. This suggests that significant improvements to treatment outcomes may be accomplished, at least in some cases, by treatment optimization. The resistance-promoting properties of different anti-cancer therapies should therefore be properly investigated in experimental and clinical settings.
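    A schematic of the kind of control problem described above can be written as follows; this is a generic evolutionary-rescue formulation for illustration, not the exact model used in the thesis.

    ```latex
    % Schematic only: u(t) is the dose, n(t) the sensitive cell population,
    % d(u) the dose-dependent death rate, \mu(u) the dose-dependent rate at
    % which resistant cells are generated, and \pi_{\mathrm{est}} the
    % establishment probability of a resistant clone.
    \begin{align}
      \dot{n}(t) &= \bigl(b - d(u(t))\bigr)\, n(t), \\
      P_{\mathrm{rescue}} &\approx 1 - \exp\!\Bigl(-\,\pi_{\mathrm{est}}
        \int_{0}^{T} \mu\bigl(u(t)\bigr)\, n(t)\, \mathrm{d}t\Bigr), \\
      \min_{u(\cdot)}\; P_{\mathrm{rescue}}
      &\quad \text{subject to } 0 \le u(t) \le u_{\max}.
    \end{align}
    ```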
  • Kostiainen, Nikke (2017)
    Game audio consists of sounds intended for the player that the game software plays back during execution. This thesis examines and evaluates characteristics of developing different kinds of game audio and compares possible implementation approaches with each other. Written sources were used, and a simple game audio engine was implemented as part of the work. The ease of using the engine in different scenarios was evaluated qualitatively. The comparison between implementation approaches was carried out using information found in the sources as well as experience gained from using existing solutions and from implementing the audio engine. The comparison found that the different approaches suit some game development situations better than others. Self-developed audio engine solutions are well suited to situations where the target platform is limited in performance or where the requirements of the audio engine call for features that existing solutions do not provide. Correspondingly, existing solutions are well suited to larger projects in which game audio is used to pursue realism.
  • Dajie, Wang (2017)
    This thesis summarises the adaptive bitrate streaming technology called Dynamic Adaptive Streaming over HTTP, also known as MPEG-DASH, as it was developed by the Moving Picture Experts Group. The thesis introduces and summarises the MPEG-DASH standard, including the content of the standard, the profiles from MPEG and the DASH Industry Forum, and an evaluation of the standard. The thesis then analyses the MPEG-DASH system and surveys related research. The analysis is organized into three parts based on the workflow of the whole system: the Media Presentation Description file and video segments hosted on the server, the network infrastructure, and the DASH clients. Finally, the thesis discusses the adoption of MPEG-DASH in different industries, including broadband, broadcast, mobile and 3D.
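    At the heart of the DASH client's part of this workflow is a rate-adaptation step: pick the next segment's representation based on measured throughput. The sketch below is a deliberately simplified, illustrative heuristic, not an algorithm specified by the MPEG-DASH standard or analysed in the thesis; the bitrate values are hypothetical.

    ```python
    # Simplified DASH-style rate adaptation: request the highest representation
    # whose bitrate fits within a safety margin of the measured throughput.
    from typing import List


    def choose_representation(bitrates_bps: List[int],
                              measured_throughput_bps: float,
                              safety_factor: float = 0.8) -> int:
        """Return the bitrate of the representation to request for the next segment."""
        budget = measured_throughput_bps * safety_factor
        candidates = [b for b in sorted(bitrates_bps) if b <= budget]
        return candidates[-1] if candidates else min(bitrates_bps)


    # Representations advertised in a hypothetical MPD, in bits per second.
    representations = [350_000, 750_000, 1_500_000, 3_000_000, 6_000_000]
    print(choose_representation(representations, measured_throughput_bps=2_200_000))
    # -> 1500000: the next segment would be fetched at 1.5 Mbit/s.
    ```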
  • Kangas, Pinja (2022)
    Sulfuric acid has a central role in atmospheric chemistry, as it is considered to make a significant contribution to cloud formation and acid rain. In the gas phase, hydrolysis of SO3 catalysed by a single water molecule is thought to be the primary pathway for forming sulfuric acid in the atmosphere. However, previous studies have calculated that when the hydrolysis reaction is catalysed by a formic acid (FA) molecule, the potential energy barrier is significantly lower than for the water-catalysed reaction. In this work, the role of dynamic and steric effects in both reactions was studied through ab initio molecular dynamics (AIMD) collision simulations. The simulations were done by colliding either FA or a water molecule with the SO3-H2O complex, or a water dimer with the SO3 molecule. Altogether 230 trajectories were calculated at the PBE/6-311+G(2pd,2df) level of theory: 70 for the collision of a water dimer with SO3, and 80 each for the collisions of a water molecule and of FA with SO3-H2O. The collision of FA with SO3-H2O led to the formation of sulfuric acid in 5 % of the simulations, whereas for the collision of a water molecule with SO3-H2O the reaction did not occur within the simulation time. Additionally, the SO3-H2O-FA pre-reactive complex formed in the simulations is shown to be more stable, most likely due to a less constrained ring structure. The collision of a water dimer with SO3 most commonly leads to the formation of SO3-H2O, with the second water molecule of the dimer either sticking to the complex or escaping. Based on the simulation results, strictly in terms of dynamic and steric effects, the FA-catalysed mechanism seems to be favored over the H2O-catalysed one.
  • Joensuu, Juhana (2022)
    Currency risk is an important yet neglected consideration for investors holding internationally diversified investment portfolios. The foreign exchange market is an extremely liquid and efficient market, with daily transaction volumes exceeding the equivalent of several trillion euros. International investors have to decide upon their level of exposure to various currency risks, typically by hedging some or all of the underlying currency exposure with currency derivative contracts. A currency overlay refers to an approach where the aggregate currency exposure of the investment portfolio is managed with a separate derivatives strategy aimed at improving the overall portfolio's risk-adjusted returns. In this thesis, we develop a novel systematic, data-driven approach to manage the currency risk of investors holding diversified bond-equity portfolios, accounting for both risk minimization and expected return maximization objectives at the portfolio level. The model is based upon modern portfolio theory, leveraging findings from prior literature on covariance modelling and expected currency returns. The focus of this thesis is on ensuring efficient risk diversification through the use of accurate covariance estimates fed by high-frequency data on exchange rates, bonds and equity indexes. As for the expected returns estimate, we identify purchasing power parity (PPP) and carry signals as credible alternatives for improving the expected risk-adjusted returns of the strategy. A block bootstrap simulation methodology is used to conduct empirical tests on different specifications of the developed dynamic overlay model. We find that dynamic risk-minimizing strategies significantly decrease portfolio risk relative to either unhedged or fully hedged portfolios. Using covariance estimates based on high-frequency returns data is likely to improve portfolio diversification relative to a simple estimator based on daily data. The empirical results are much less clear in terms of risk-adjusted returns: we find tentative evidence that the tested dynamic strategies improve risk-adjusted returns, but due to the limited data sample used in this study, the findings regarding expected returns are highly uncertain. Nevertheless, considering evidence from prior research covering much longer time horizons, we expect that both the risk-minimizing and the return-maximizing components of the developed model are likely to improve portfolio-level risk-adjusted returns. We recommend using the developed model as an input to support the currency risk management decisions of investors with globally diversified investment portfolios, along with other relevant considerations such as solvency or discretionary market views.
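    The risk-minimizing part of such an overlay can be illustrated with a minimum-variance hedge computation: given the covariance of currency-forward returns and their covariance with the unhedged portfolio return, the variance-minimizing hedge ratios have a closed form. The snippet below uses synthetic covariance numbers and hypothetical currency labels; it is a sketch of the general technique, not the thesis's model.

    ```python
    # Minimum-variance currency hedge ratios: choose weights h on the currency
    # forwards to minimize Var(r_portfolio + h' r_forwards).
    # Closed form: h* = -Cov(forwards)^{-1} Cov(forwards, portfolio).
    import numpy as np

    currencies = ["USD", "JPY", "GBP"]           # hypothetical exposures

    cov_ff = np.array([                          # covariance of forward returns (synthetic)
        [0.0100, 0.0030, 0.0040],
        [0.0030, 0.0120, 0.0020],
        [0.0040, 0.0020, 0.0090],
    ])
    cov_fp = np.array([0.0050, 0.0010, 0.0035])  # covariance with the unhedged portfolio
    var_p = 0.0200                               # variance of the unhedged portfolio

    h = -np.linalg.solve(cov_ff, cov_fp)         # optimal hedge ratios
    hedged_var = var_p + 2 * h @ cov_fp + h @ cov_ff @ h

    print(dict(zip(currencies, h.round(3))))
    print(f"unhedged variance {var_p:.4f} -> hedged variance {hedged_var:.4f}")
    ```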
  • Nyman, Thomas (2014)
    Operating system-level virtualization is virtualization technology based on running multiple isolated userspace instances, commonly referred to as containers, on top of a single operating system kernel. The fundamental difference compared to traditional virtualization is that the targets of virtualization in OS-level virtualization are kernel resources, not hardware. OS-level virtualization is used to implement Bring Your Own Device (BYOD) policies on contemporary mobile platforms. Current commercial BYOD solutions, however, do not allow applications to be containerized dynamically upon user request. The ability to do so would greatly improve the flexibility and usability of such schemes. In this work we study whether existing OS-level virtualization features in the Linux kernel can meet the needs of use cases reliant on such dynamic isolation. We present the design and implementation of a prototype which allows applications in dynamic isolated domains to be migrated from one device to another. Our design fits together with security features in the Linux kernel, allowing the security policy influenced by user decisions to be migrated along with the application. The deployability of the design is improved by basing the solution on functionality already available in the mainline Linux kernel. Our evaluation shows that the OS-level virtualization features in the Linux kernel indeed allow applications to be isolated in a dynamic fashion, although known gaps in the compartmentalization of kernel resources require trade-offs between security and interoperability to be made in the design of such containers.
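    The kernel resources mentioned above are virtualized through Linux namespaces: a process can detach itself into new namespaces so that, for example, its hostname, mounts or network stack are private to the container. As a minimal illustration of the mechanism (not part of the thesis prototype), the snippet below moves the calling process into a new UTS namespace and changes its hostname without affecting the rest of the system; it requires Linux and root or CAP_SYS_ADMIN.

    ```python
    # Minimal OS-level isolation demo: enter a new UTS namespace via libc's
    # unshare() and set a hostname visible only inside that namespace.
    import ctypes
    import os
    import socket

    CLONE_NEWUTS = 0x04000000          # constant from <sched.h>

    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    if libc.unshare(CLONE_NEWUTS) != 0:
        errno = ctypes.get_errno()
        raise OSError(errno, os.strerror(errno), "unshare(CLONE_NEWUTS) failed")

    socket.sethostname("isolated-domain")   # does not leak outside the namespace
    print("hostname inside the new UTS namespace:", socket.gethostname())
    ```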
  • Bankowski, Victor (2021)
    WebAssembly (WASM) is a binary instruction format for a stack-based virtual machine, originally designed for the Web but also capable of being run outside of browser contexts. The WASM binary format is designed to be fast to transfer, load and execute. WASM programs are designed to be safe to execute by running them in a memory-safe, sandboxed environment. Combining dynamic linking with WebAssembly could allow the creation of adaptive modular applications that are cross-platform and sandboxed but still fast to load and execute. This thesis explores implementing dynamic linking in WebAssembly. Two artifacts are presented: a dynamic linking runtime prototype which exposes a POSIX-like host function interface for modules, and an Android GUI interfacing prototype built on top of the runtime. In addition, the results of measurements performed on both artifacts are presented. Dynamic linking does improve the memory usage and the startup time of applications when only some modules are needed. However, if all modules are needed immediately, then dynamically linked applications perform worse than statically linked applications. Based on the results, dynamically linking WebAssembly modules could be a viable technology for PC and Android. The poor performance of a Raspberry Pi in the measurements indicates that dynamic linking might not be viable for resource-constrained systems, especially if applications are performance critical.
  • Nurminen, Niilo Waltteri (2021)
    Phase transitions in the early Universe and in condensed matter physics are active fields of research. During these transitions, objects such as topological solitons and defects are produced by the breaking of symmetry. Studying such objects more thoroughly could shed light on some modern problems in cosmology, such as baryogenesis, and explain many aspects of materials research. One example of such topological solitons are the (1+1)-dimensional kinks and their higher-dimensional counterparts, domain walls. The dynamics of kink collisions are complicated and very sensitive to initial conditions. Making accurate predictions within such a system has proven to be difficult, and research has been conducted since the 70s. Especially difficult is predicting the location of resonance windows and giving a proper theoretical explanation for such a structure. A deeper understanding of these objects is interesting in its own right but can also bring insight into predicting the cosmological signatures they may generate. In this thesis we summarize the common field-theoretic tools and methods for the analytic treatment of kinks. Homotopy theory and its applications are also covered in the context of classifying topological solitons and defects. We present our numerical simulation scheme and results on kink-antikink and kink-impurity collisions in the $\phi^4$ model. Kink-antikink pair production from a wobbling kink is also studied, in which case we found that the separation velocity of the produced kink-antikink pair is directly correlated with the excitation amplitude of the wobbling kink. Direct annihilation of the produced pair was also observed. We modify the $\phi^4$ model by adding a small symmetry-breaking term $\delta \phi^3$, which turns the kinks into accelerating bubble walls. The collision dynamics and pair production of these objects are explored with the same simulation methods. We observe multiple new effects in kink-antikink collisions, such as potentially perpetual bouncing and faster bion formation in comparison to the $\phi^4$ model. We also show that the $\delta$ term defines the preferred vacuum by inevitably annihilating any kink-antikink pair. During pair production we noticed a momentum transfer between the produced bion and the original kink, and direct annihilation seems unlikely in such processes. For wobbling kink-impurity collisions we found an asymmetric spectral wall. Future research prospects and potential expansions of our analysis are also discussed.
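    For reference, the $\phi^4$ model and its static kink have the standard form shown below (textbook conventions; the thesis's normalization of the potential and of the symmetry-breaking $\delta\,\phi^3$ term may differ).

    ```latex
    % Standard phi^4 conventions: Lagrangian, double-well potential with a small
    % symmetry-breaking term, and the static kink solution for delta = 0.
    \begin{align}
      \mathcal{L} &= \tfrac{1}{2}\,\partial_\mu\phi\,\partial^\mu\phi - V(\phi),
      \qquad
      V(\phi) = \tfrac{1}{4}\bigl(\phi^2 - 1\bigr)^2 + \delta\,\phi^3, \\
      \phi_{K}(x) &= \tanh\!\Bigl(\frac{x - x_0}{\sqrt{2}}\Bigr)
      \qquad (\text{static kink centred at } x_0 \text{ for } \delta = 0).
    \end{align}
    ```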