
Browsing by Issue Date


  • Säde, Solja (2021)
    Photocatalytic reactions utilize energy harnessed from light to activate a catalyst. In photoredox catalysis, an excited photocatalyst can take part in redox reactions with a substrate. The most common photocatalysts can be divided into three classes: metal catalysts, organic dyes, and heterogeneous semiconductors. These catalysts are often employed together with a transition metal dual catalyst: the dual catalyst enables the cross-coupling of substrates, while the photocatalyst oxidizes or reduces the dual catalyst. Photocatalytic reactions can offer a milder alternative to traditional C-N coupling reactions. The literature review section examined the photocatalytic N-arylation of pyrrolidines. The review found that pyrrolidines were successfully N-arylated with all of the catalyst types and with multiple variations of the substituents on the aryl halide. In the majority of the studies, electron-withdrawing groups (EWG) as substituents enhanced product yields, whereas electron-donating groups (EDG) decreased them; in an organic-dye-catalysed reaction, the effects of the substituents were the opposite. In addition, the photocatalytic reactions were compared with traditional C-N coupling reactions, such as the Buchwald-Hartwig reaction, Ullmann-type reactions, nucleophilic aromatic substitution, and the Chan-Lam reaction, which often require harsh reaction conditions. The experimental part of this thesis examined the photocatalytic N-arylation of 3-substituted pyrrolidines. The objectives of this study were to investigate the use of photoredox methodologies for the C-N coupling of 3-substituted pyrrolidines to arenes, and to examine the scope and limitations of the reaction and the effects of substituents. In addition, the aim was to optimize the reaction conditions over multiple parameters and for each product separately, to adapt the reaction to a flow chemistry apparatus, and to execute scale-up reactions on both photoreactors. The study found that 3-substituted pyrrolidines could be successfully coupled with aryl halides with wide variation in the substituents of both starting materials. With optimization, the reactions with lower product yields were improved significantly. The reaction was successfully scaled up, but its adaptation to the flow reactor requires further optimization. Photocatalytic C-N coupling reactions offer a promising alternative to traditional reactions.
  • Mäki-Iso, Emma (2021)
    The magnitude of the market risk of investments is often assessed with risk measures. A risk measure is a mapping from the set of random variables describing possible losses to the real numbers. Risk measures make it easy to compare the riskiness of different investments, and banking supervisors use them in monitoring the capital adequacy of banks. The risk measure longest in general use is Value-at-Risk (VaR). VaR gives the largest loss incurred at a chosen confidence level α, i.e. it is the α-quantile of the loss distribution. In the newest Basel guidance (Minimum capital requirements for market risk), a risk measure called expected shortfall replaces VaR in the calculation of the capital requirement. Expected shortfall gives the expected value of the loss when the loss is greater than the value given by VaR. The risk measure is being changed because the theoretical properties of VaR are not as good as those of expected shortfall: VaR is not subadditive, meaning that the combined risk of positions can in some cases be greater than the sum of the risks of the individual positions. This implies that the risk of an undiversified portfolio can appear smaller than that of a diversified one. Expected shortfall is not entirely without problems either, because it is not consistently elicitable: no scoring function exists with which estimated and realized values could be compared consistently. In addition, since the value of expected shortfall depends on all of the losses in the tail, it is sensitive to errors in the tail losses. This is not a desirable property, because estimating the tails of loss distributions involves considerable uncertainty. Because risk estimation involves uncertainty, regulation obliges banks to backtest the risk estimates used in calculating the regulatory capital requirement. Backtesting is the process in which estimated risk figures are compared with realized losses. Backtesting of VaR estimates is based on the number of days in the testing period on which the loss exceeds the loss level given by the VaR estimate. For expected shortfall, backtesting methods are not yet as established as those for VaR estimates. This thesis presents three different ways to backtest expected shortfall estimates, introduced by Kratz and colleagues, by Moldenhauer and Pitera, and by Costanzino and Curran. The methods examine simultaneous exceedances of several VaR levels, the number of observations for which the secured position, i.e. the difference between the loss and the risk estimate, cumulatively sums to a positive number, and the average size of the VaR exceedances. The computational part of the thesis studied whether VaR and expected shortfall backtests give similar results and whether the length of the observation period used for risk estimation affects how the estimates perform in the backtests. The calculations showed that the expected shortfall and VaR backtests gave similar results. Estimates computed from market data with estimation windows of different sizes obtained test statistics of different sizes in the backtests, and accepted a wrong model or rejected a correct model with different probabilities. When purely simulated data were used, there were no differences between the results of estimates computed with estimation windows of different sizes. From this it can be concluded that the differences in test results between estimates computed over observation periods of different lengths are due not only to the quantity of the observations but also to their quality.
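    As a purely illustrative sketch (not part of the thesis), both risk measures can be estimated empirically from a sample of losses; the code below assumes losses are recorded as positive numbers and uses only NumPy.

```python
import numpy as np

def var_es(losses, alpha=0.975):
    """Empirical VaR and expected shortfall at confidence level alpha.

    losses: array of realized losses (positive = loss).
    VaR is the alpha-quantile of the loss distribution; ES is the
    mean of the losses that exceed VaR.
    """
    losses = np.asarray(losses)
    var = np.quantile(losses, alpha)          # alpha-quantile of losses
    tail = losses[losses > var]               # losses beyond VaR
    es = tail.mean() if tail.size else var    # ES >= VaR by construction
    return var, es

# Example with simulated normal losses
rng = np.random.default_rng(0)
sample = rng.normal(loc=0.0, scale=1.0, size=100_000)
print(var_es(sample))  # roughly VaR ~ 1.96, ES ~ 2.34 for N(0,1) at 97.5 %
```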
  • Sallasmaa, Christa (2021)
    The topic of this thesis is participatory budgeting and its connection to the discussion between neoliberalism and participatory governance in the context of city development. Helsinki started its own model of participatory budgeting in 2018 and has pledged to continue the concept in the future. I examine whether Helsinki’s participatory budgeting has the potential to support the ideologies of neoliberalism or participatory governance. In practice, I explore the views of the city government and of active members of Helsinki’s neighborhood associations; neighborhood associations had a significant role in the original participatory budgeting of Porto Alegre. I used an interview and a qualitative survey to collect my data. Neoliberalism has influenced the inequality between regions and the so-called crisis of democracy. Direct involvement of citizens is seen as a solution to these problems. Neoliberalism and participation have a paradoxical relationship: they have received similar criticism. In participatory governance, participation means deliberative decision-making based on the exchange of knowledge, but in neoliberalism participation can be a rhetorical tool to cover up actual decision-making, or a city branding technique. Porto Alegre’s original model of participatory budgeting is seen as a part of participatory governance, but many of the international models seem to be more compatible with neoliberal ideology. The city government has not reserved enough resources for the participatory budgeting. The execution was rushed and showed signs of rationalization. According to the interview and the qualitative survey, inequality between regions might be the downfall of Helsinki’s participatory model. The active members of neighborhood associations see the benefits of participatory budgeting, but only from the perspective of certain regions. Currently, Helsinki’s participatory budgeting works better as a branding technique than as a method of decision-making. It seems to be more compatible with neoliberalism than with participatory governance.
  • Kauppala, Juuso (2021)
    The rapidly increasing global energy demand has led to the necessity of finding sustainable alternatives for energy production. Fusion power is seen as a promising candidate for efficient and environmentally friendly energy production. One of the main challenges in the development of fusion power plants is finding suitable materials for the plasma-facing components in the fusion reactor. The plasma-facing components must endure extreme environments with high heat fluxes and exposure to highly energetic ions and neutral particles. So far the most promising materials for the plasma-facing components are tungsten (W) and tungsten-based alloys. Another promising class of materials for the plasma-facing components is high-entropy alloys; many high-entropy alloys have been shown to exhibit high resistance to radiation and other properties desirable for industrial and high-energy applications. In materials research, both experimental and computational methods can be used to study materials’ properties and characteristics. Computational methods can be either quantum mechanical calculations, which produce accurate results but are computationally extremely heavy, or more efficient atomistic simulations such as classical molecular dynamics simulations. In molecular dynamics simulations, interatomic potentials are used to describe the interactions between particles; they are often analytical functions that can be fitted to the properties of the material. Instead of fixed functional forms, interatomic potentials based on machine learning methods have also been developed. One such framework is the Gaussian approximation potential, which uses Gaussian process regression to estimate the energies of the simulation system. In this thesis, the current state of fusion reactor development and of research on high-entropy alloys is presented, and an overview of interatomic potentials is given. Gaussian approximation potentials for WMoTa concentrated alloys are developed using different numbers of sparse training points. A detailed description of the training database is given and the potentials are validated. The developed potentials are shown to give physically reasonable results for certain bulk and surface properties and could be used in atomistic simulations.
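    Purely as an illustrative sketch of the regression technique named above (not the thesis's actual GAP code; the descriptor, kernel, and data are invented placeholders), Gaussian process regression of energies can be written with scikit-learn:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Placeholder training data: each row is a structural descriptor
# vector for an atomic environment, y is its reference energy (eV).
rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 8))                # 200 environments, 8 features
y = np.sin(X).sum(axis=1) + 0.01 * rng.normal(size=200)

# RBF kernel plus a noise term, loosely analogous to a GAP-style fit
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-4),
                              normalize_y=True)
gp.fit(X, y)

X_new = rng.uniform(size=(5, 8))
energy, sigma = gp.predict(X_new, return_std=True)  # predictions + uncertainty
```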
  • Zhu, Yangming (2021)
    C-H bonds are abundantly present in organic compounds and therefore represent a large class of targets for activation in modern synthetic chemistry. Starting from simple and usually inexpensive compounds, direct activation of C-H bonds provides atom-efficient (low waste generation) access to highly functionalized products with high added value. One of the most desirable subclasses of C-H bond functionalization is its transformation to a C-B bond (borylation), as organoboron compounds are important and widely used building blocks in organic synthesis, particularly in pharmaceuticals, agricultural chemicals and organic materials. Traditionally, transition metal-based catalysts have been used for C-H borylation; recently, interest has grown towards metal-free approaches. This thesis is focused on the development of metal-free Csp2-H borylation of arenes by coupling two main concepts: borenium cations and Frustrated Lewis Pairs (FLPs). Borenium cations are positively charged boron species possessing two σ-bound substituents, with the third coordination site occupied by a ligand (L) bound through a coordinative dative interaction. Owing to the relative stability ensured by the donor ligand and the enhanced reactivity arising from the unsaturated coordination sphere and positive charge, the chemistry of boreniums has attracted considerable attention. FLPs comprise separate (intermolecular) or tethered within one molecule (intramolecular) Lewis acidic (LA) and Lewis basic (LB) components, which are prevented from forming a classical Lewis adduct by steric repulsion. Since FLPs possess unquenched reactivity, they are capable of heterolytically cleaving σ and π chemical bonds, including C-H bonds. The method presented in this work relies on the cooperative action of 2-aminopyridinyl-borenium-based FLPs, comprising a borenium cation as the LA component and a bulky aminopyridine ligand as the LB component, to borylate aromatic Csp2-H bonds. In this approach, the LA serves as the reagent itself (the source of boron), while the LB (the ligand), which abstracts a proton upon C-H bond cleavage, can be fully recovered from the reaction mixture. Thus, this approach offers high atom efficiency and low waste generation. We achieved borylation of electron-rich thiophenes, furans and pyrroles under ambient conditions, and further dedicated our efforts to improving the efficiency and economics of the proposed method.
  • Lintuluoto, Adelina Eleonora (2021)
    At the Compact Muon Solenoid (CMS) experiment at CERN (the European Organization for Nuclear Research), the building blocks of the Universe are investigated by analysing the observed final-state particles resulting from high-energy proton-proton collisions. However, direct detection of final-state quarks and gluons is not possible due to a phenomenon known as colour confinement. Instead, event properties with a close correspondence to their distributions are studied. These event properties are known as jets. Jets are central to particle physics analysis, and our understanding of them, and hence of our Universe, depends on our ability to accurately measure their energy. Unfortunately, current detector technology is imprecise, necessitating downstream correction of measurement discrepancies. To achieve this, the CMS experiment employs a sequential multi-step jet calibration process. The process is performed several times per year, and more often during periods of data collection. Automating the jet calibration would increase the efficiency of the CMS experiment. By automating the code execution, the workflow could be performed independently of the analyst. This, in turn, would speed up the analysis and reduce analyst workload. In addition, automation facilitates higher levels of reproducibility. In this thesis, a novel method for automating the derivation of jet energy corrections from simulation is presented. To achieve automation, the methodology utilises declarative programming: the analyst is simply required to express what should be executed, and no longer needs to determine how to execute it. To successfully automate the computation of jet energy corrections, it is necessary to capture detailed information concerning both the computational steps and the computational environment. The former is achieved with a computational workflow, and the latter using container technology. This allows a portable and scalable workflow to be achieved, which is easy to maintain and to compare to previous runs. The results of this thesis strongly suggest that capturing complex experimental particle physics analyses with declarative workflow languages is both achievable and advantageous. The productivity of the analyst was improved, and reproducibility facilitated. However, the method is not without its challenges: declarative programming requires the analyst to think differently about the problem at hand, so there are some sociological challenges to methodological uptake. However, once the extensive benefits are understood, we anticipate widespread adoption of this approach.
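    As a hedged, invented illustration of the declarative idea (the thesis's actual workflow language and step definitions are not reproduced here; all names and images below are placeholders), a workflow can be expressed as data that declares what to run and in which container image, leaving the how to an engine:

```python
# A toy declarative workflow: each step declares its container
# environment, command, and data dependencies; an engine decides execution.
workflow = {
    "steps": [
        {"name": "simulate",
         "environment": "example/cms-sim:1.0",          # placeholder image
         "command": "run_simulation --events 10000",    # placeholder command
         "outputs": ["sim.root"]},
        {"name": "derive_corrections",
         "environment": "example/jec-tools:1.0",
         "command": "derive_jec --input sim.root",
         "inputs": ["sim.root"],
         "outputs": ["corrections.txt"]},
    ]
}

def run(workflow):
    """Minimal 'engine': walks the declared steps in order."""
    for step in workflow["steps"]:
        print(f"[{step['environment']}] {step['command']}")

run(workflow)
```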
  • Lauha, Patrik (2021)
    Automatic bird sound recognition has been studied by computer scientists since the late 1990s. Various techniques have been exploited, but no general method that comes even close to matching the performance of a human expert has yet been developed. In this thesis, the subject is approached by reviewing alternatives to plain cross-correlation as a similarity measure between two signals in template-based bird sound recognition models. Template-specific binary classification models are fitted with different methods and their performance is compared. The methods considered are template averaging and processing before applying cross-correlation, the use of texture features as additional predictors, and feature extraction through transfer learning with convolutional neural networks. It is shown that the classification performance of template-specific models can be improved by template refinement and by utilizing neural networks’ ability to automatically extract relevant features from bird sound spectrograms.
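    For illustration only (not the thesis code; shapes and data are placeholders), template matching by normalized cross-correlation over spectrograms can be sketched as follows:

```python
import numpy as np
from scipy.signal import correlate2d

def max_normalized_xcorr(spectrogram, template):
    """Peak normalized cross-correlation between a spectrogram
    and a (smaller) template patch; higher = more similar."""
    s = (spectrogram - spectrogram.mean()) / spectrogram.std()
    t = (template - template.mean()) / template.std()
    corr = correlate2d(s, t, mode="valid")
    return corr.max() / t.size

# Placeholder arrays standing in for real spectrograms
rng = np.random.default_rng(2)
spec = rng.random((128, 400))      # frequency bins x time frames
templ = spec[40:72, 100:140]       # a patch cut from the same signal
print(max_normalized_xcorr(spec, templ))  # near 1.0 for a matching patch
```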
  • Riihimäki, Katariina (2021)
    The mafic-ultramafic Kevitsa intrusion, located within the Central Lapland Greenstone Belt in Northern Finland, hosts a disseminated Ni-Cu-PGE deposit. Drillhole KVX018 penetrates the intrusion, intersecting its bottom contact at 1772 meters, and is associated with relatively low resistivity at the bottom of the intrusion. The KVX018 drillhole is the deepest drilled into the intrusion so far, and the observed low resistivity zone is unique for the study area. Previous studies have shown the bottom contact of the Kevitsa intrusion to be associated with seismic reflections and possible mineralization. This paper studies the characteristics of the bottom contact of the Kevitsa intrusion from the drill core KVX018 and interprets the origin of the low resistivity and its relationship with mineralogy. Based on geochemical and petrophysical data, four layers with different characteristics were observed within the studied section: footwall, contact zone, lower cumulates and upper cumulates. The lower cumulates were found to be strongly contaminated by elements associated with hydrothermal fluids from the country rocks. The contamination was observed for 125 meters upwards from the basal contact as elevated concentrations of e.g. lithium, lanthanum, rubidium and potassium, and footwall rocks close to the contact were found to be depleted in these elements. The contact zone was found to be strongly altered by silicification and albitization. Hydrothermal fluid activity at the bottom contact was also observed as epidote alteration of plagioclase feldspar. Contact zone mineralization was observed and found to be of the false ore type, with a Ni tenor of 2.28%. Upwards from the contact mineralization, the mineralization was found first to change into local low-grade Ni-PGE ore and then into normal ore in the top part of the studied drill core section. Ultramafic intrusive rocks were observed to be pervasively altered to amphibole, locally to a degree where in many rocks the alteration had overprinted the primary mineralogy and textures beyond recognition. Alteration intensity was found to increase downwards within the lowermost part of the intrusion. Salt minerals were observed by eye on the surface of some samples and by X-ray diffraction (XRD) in one sample; the XRD studies indicated the presence of nitratine and sylvite in the studied sample. These salt minerals are commonly present in evaporites, and their presence indicates an evaporitic source. The resistivity of rocks is generally affected by e.g. sulfide content, salinity, porosity and alteration. Resistivity and chargeability were found to correlate, indicating that resistivity also correlates with the presence of sulfide minerals. However, below a depth of 680 meters, resistivity decreases without a correlating trend in other petrophysical properties. This paper concludes that the observed low resistivity results from the presence of salt and sulfide minerals as well as from alteration intensity.
  • Rawlings, Alexander (2021)
    This thesis presents the results from seventeen collisionless merger simulations of massive early-type galaxies, in an effort to understand the coalescence of supermassive black holes (SMBHs) in the context of the Final Parsec Problem. A review of the properties of massive early-type galaxies and their SMBHs is presented alongside a discussion of SMBH binary coalescence to motivate the initial conditions used in the simulations. The effects of varying the SMBH masses and stellar density profiles in the progenitor initial conditions on SMBH coalescence were investigated. Differing mass resolutions between the stellar particles and the SMBHs were also tested for each physical realisation. The simulations were performed on the supercomputers Puhti and Mahti at CSC, the Finnish IT Centre for Science. SMBH coalescence was found to occur only in mergers involving SMBH binaries of equal mass, with the most rapid coalescence observed in galaxies with a steep density profile. In particular, the eccentricity of the SMBH binary was observed to be crucial for coalescence: all simulations that coalesced displayed an orbital eccentricity in excess of e=0.7 for the majority of the time for which the binary was bound. Simulations of higher mass resolution were found to have an increased number of stellar particles able to interact with the SMBH binary to remove orbital energy and angular momentum, driving the binary to coalescence. The gravitational wave emission from an equal-mass SMBH binary in the final stages before merging was calculated to be within the detection limits required for measurement by pulsar timing arrays. Mergers between galaxies with unequal-mass SMBHs were unable to undergo coalescence irrespective of mass resolution or progenitor density profile, despite the binary in some of these simulations displaying a high orbital eccentricity. It was determined that the stellar particles interacting with the SMBH binary were unable to remove the orbital energy and angular momentum required to bring the SMBHs to within the separation required for efficient gravitational wave emission. A trend of an increasing number of stellar particles able to remove energy from the SMBH binary with increasing mass resolution was observed across all the simulation suites. This observation is of paramount importance, as three-body interactions are essential in removing orbital energy and angular momentum from the SMBH binary, thus overcoming the Final Parsec Problem. As such, it is concluded that the Final Parsec Problem is a numerical artefact arising from insufficient mass resolution between the stellar particles and the SMBHs rather than a physical phenomenon.
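    As illustrative context rather than a result of the thesis, the importance of eccentricity is visible in the Peters (1964) expression for gravitational-wave-driven orbital decay, where the (1-e^2)^{-7/2} factor sharply accelerates coalescence for eccentric binaries:

```latex
\frac{da}{dt} = -\frac{64}{5}\,
\frac{G^{3}\, m_{1} m_{2} (m_{1}+m_{2})}{c^{5}\, a^{3} \left(1-e^{2}\right)^{7/2}}
\left(1 + \frac{73}{24}e^{2} + \frac{37}{96}e^{4}\right)
```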
  • Bäckroos, Sami (2021)
    High pressure inside e.g. blood vessels or other biological cavities is a major risk factor for many preventable diseases. Most measuring methods require physical contact or other kinds of projected forces. Both can be unpleasant for the patient, and physical contact might additionally call for either continuous disinfection or single-use probes, depending on the measurement method and the target body part. We have been experimenting with handheld non-contacting pressure measuring devices based on acoustic waves. These excite mechanical waves, whose velocity varies with pressure, on the surface of a biological cavity. The excitation methods tried are nearly unnoticeable to the patient, allowing for more pleasant and waste-free measurements. Using the data from the latest clinical trial, a new analysis algorithm was devised to improve the accuracy of the pressure estimates. Instead of the time-of-flight (TOF) of the main mechanical wave (MMW) alone, the new algorithm estimates the pressure using the MMW and a previously unseen feature, improving the R^2 from 0.60 to 0.72.
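    Purely as an invented illustration of the kind of fit and score involved (the thesis's actual feature is not public; all numbers below are made up), a pressure estimate can be regressed on wave features and evaluated with R^2:

```python
import numpy as np

# Invented features: time-of-flight of the main mechanical wave (MMW)
# plus a hypothetical second feature, regressed against reference pressure.
rng = np.random.default_rng(3)
n = 200
tof = rng.uniform(1.0, 2.0, n)                  # ms, placeholder values
extra = rng.uniform(0.0, 1.0, n)                # hypothetical second feature
pressure = 40 - 12 * tof + 6 * extra + rng.normal(0, 1.5, n)  # mmHg, synthetic

X = np.column_stack([np.ones(n), tof, extra])
beta, *_ = np.linalg.lstsq(X, pressure, rcond=None)  # ordinary least squares
pred = X @ beta

ss_res = np.sum((pressure - pred) ** 2)
ss_tot = np.sum((pressure - pressure.mean()) ** 2)
print("R^2 =", 1 - ss_res / ss_tot)             # coefficient of determination
```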
  • Turtio, Panu (2021)
    The aim of this work is to study how an upper secondary school course on primitive roots can be produced. No upper-secondary-level material on primitive roots exists beforehand, so the work must develop a method for converting university-level material to the upper secondary level. The work presents and proves theorems of number theory, selected to be the minimum needed for treating primitive roots. In addition, the work introduces the Diffie-Hellman key exchange protocol and the square-and-multiply algorithm used for breaking it. Based on the material covered in the work, an upper secondary school number theory course on primitive roots is produced. The course is produced by comparing the differences between university and upper secondary school teaching materials in analysis, and by extracting from these differences regularities by which university-level material can be converted to suit upper secondary teaching. The differences found between the university- and upper-secondary-level materials were the scoping of content, the conversion of mathematical notation into written language, the order in which content is taught, an emphasis on proofs at university, and an emphasis on examples at upper secondary school. Taking these observations into account, the theorems in the mathematics section of the work were converted into a coherent whole suitable for the upper secondary environment. This whole is a sufficient basis for teaching a course on these topics and also includes a teaching schedule.
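    As an illustrative sketch of the two named techniques (toy values, not the course material itself), square-and-multiply computes modular powers in O(log n) multiplications, and Diffie-Hellman uses such powers of a primitive root to build a shared secret:

```python
def square_and_multiply(base, exponent, modulus):
    """Fast modular exponentiation: O(log exponent) multiplications."""
    result = 1
    base %= modulus
    while exponent:
        if exponent & 1:              # current bit set: multiply in base
            result = result * base % modulus
        base = base * base % modulus  # square for the next bit
        exponent >>= 1
    return result

# Toy Diffie-Hellman exchange: g = 5 is a primitive root modulo p = 23
p, g = 23, 5
a, b = 6, 15                        # private keys (toy values)
A = square_and_multiply(g, a, p)    # Alice sends A = g^a mod p
B = square_and_multiply(g, b, p)    # Bob sends B = g^b mod p
assert square_and_multiply(B, a, p) == square_and_multiply(A, b, p)  # shared secret
```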
  • Pakkanen, Noora (2021)
    In Finland, the final disposal of spent nuclear fuel will start in the 2020s, with the fuel deposited 400-450 meters deep in the crystalline bedrock. Disposal will follow the Swedish KBS-3 principle, in which spent nuclear fuel canisters are protected by multiple barriers planned to prevent the migration of radionuclides to the surrounding biosphere. With multiple barriers, the failure of one barrier will not endanger the isolation of the spent nuclear fuel. The insoluble spent nuclear fuel will be stored in iron-copper canisters placed in vertical tunnels within the bedrock. The iron-copper canisters are surrounded by a bentonite buffer to protect them from groundwater and from movements of the bedrock. MX-80 bentonite has been proposed as the bentonite buffer in the Finnish spent nuclear fuel repository. In the case of canister failure, the bentonite buffer is expected to absorb and retain radionuclides originating from the spent nuclear fuel. If the salinity of the groundwater of Olkiluoto island were to decrease, chemical erosion of the bentonite buffer could result in the generation of small particles called colloids. Under suitable conditions, these colloids could act as potential carriers for otherwise immobile radionuclides and transport them outside the facility area to the surrounding biosphere. The objective of this thesis work was to study the effect of MX-80 bentonite colloids on radionuclide migration within two granitic drill core columns (VGN and KGG) using two different radionuclides, 134Cs and 85Sr. Batch-type sorption and desorption experiments were conducted to gain information on the sorption mechanisms of the two radionuclides as well as on the sorption competition between MX-80 bentonite colloids and crushed VGN rock. Colloids were characterized with scanning electron microscopy (SEM) and particle concentrations were determined with dynamic light scattering (DLS). Allard water mixed with MX-80 bentonite powder was used to imitate groundwater conditions of low salinity and the colloids. Strontium broke through the VGN drill core column successfully, whereas caesium did not break through either the VGN or the KGG column. Caesium sorption was more irreversible in nature than that of strontium, and caesium was thus retained strongly within both columns. For both radionuclides, the presence of colloids did not seem to enhance radionuclide migration notably. Breakthrough from the columns was affected both by radionuclide properties and by colloid filtration within tubes, stagnant pools and fractures. The experiments could be further complemented by conducting batch-type sorption experiments with crushed KGG and by introducing new factors to the column experiments. The experimental work was carried out at the Department of Chemistry, Radiochemistry, at the University of Helsinki.
  • Lehtonen, Leevi (2021)
    Sex differences can be found in most human phenotypes, and they play an important role in human health and disease. Females and males have different sex chromosomes, which are known to cause sex differences, as are differences in the concentrations of sex hormones such as testosterone, estradiol and progesterone. However, the role of the autosomes has remained more debated. The primary aim of this thesis is to assess the magnitude and relevance of human sex-specific genetic architecture in the autosomes. This is done by calculating sex-specific heritability estimates and estimates of the genetic correlation between females and males, as well as by comparing these to sex differences at the phenotype level. Additionally, the heritability and genetic correlation estimates are compared between two populations, in order to assess the magnitude of sex differences relative to differences between populations. The analyses in this thesis are based on sex-stratified genome-wide association study (GWAS) data for 48 phenotypes in the UK Biobank (UKB), which contains genotype data from approximately 500 000 individuals as well as thousands of phenotype measurements. A replication of the analyses using three phenotypes was also performed on data from the FinnGen project, a dataset of approximately 175 000 individuals. The 48 phenotypes used in this study range from biomarkers such as serum testosterone and albumin levels to general traits such as height and blood pressure. The heritability and genetic correlation estimates were calculated using linkage disequilibrium score regression (LDSC). LDSC fits a linear regression model between the test statistic values of GWAS variants and linkage disequilibrium (LD) scores calculated from a reference population. For most phenotypes, the heritability and genetic correlation results show little evidence of sex differences. Serum testosterone level and waist-to-hip ratio are exceptions, showing strong evidence of sex differences both at the genetic and at the phenotype level. However, the overall correlation between phenotype-level sex differences and sex differences in heritability or genetic correlation estimates is low. The replication in the FinnGen dataset for height, weight and body mass index (BMI) showed that for these traits the differences in heritability estimates and genetic correlations between the Finnish and UK populations are comparable to or larger than the differences found between males and females.
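    As a schematic illustration of the core regression (a simplification, not the real LDSC software; the data are simulated under assumed values), LD score regression relates per-variant chi-squared statistics to LD scores:

```python
import numpy as np

# Schematic LD score regression: E[chi2_j] = 1 + (N * h2 / M) * l_j,
# so the slope of chi2 on the LD score l_j estimates N * h2 / M.
rng = np.random.default_rng(4)
M, N, h2 = 50_000, 100_000, 0.4          # variants, sample size, assumed true h2
ld_scores = rng.gamma(shape=2.0, scale=50.0, size=M)
chi2 = 1 + N * h2 / M * ld_scores + rng.normal(0, 1.0, M)  # toy noise model

slope, intercept = np.polyfit(ld_scores, chi2, deg=1)
h2_hat = slope * M / N
print(f"estimated h2 = {h2_hat:.3f}")    # close to 0.4 by construction
```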
  • Mesimäki, Johannes (2021)
    Collisions and near accidents between pedestrians and cyclists can result in serious injuries and death, but have received limited academic attention. Using an online survey, this thesis aimed to increase knowledge of such events, to assess the sense of safety of pedestrians and cyclists in traffic, and to identify, using practice theory, safety-related constraints on the uptake of walking and cycling. Practice theory considers human behaviour to be guided via participation in established social practices constituted by interconnected elements of meaning, material and competence. As such, this thesis contributes to debates concerning barriers to walking and cycling from a safety perspective. The survey was directed at Finnish cities with populations over 100,000 and asked frequent pedestrians and cyclists to report details of collisions and near accidents between pedestrians and cyclists that they had experienced in the previous three years. Additionally, the survey asked questions concerning respondents’ sense of safety in traffic when walking or cycling. The survey data were analysed with chi-square tests of independence and ordinal logistic regression. Constraints on the uptake of cycling and walking, and ways to overcome them, were identified with a practice theory analysis. This involved examining the implications of the survey results for the elements constituting the practices, their interrelations, and how the practices influenced each other. According to the results, near accidents are roughly 50 times more frequent than collisions. Only 16 respondents had experienced a collision, whereas roughly a third had experienced at least one near accident. Additionally, shared paths were associated with more collisions and near accidents than separated spaces, and respondents felt less safe and were less willing to travel on them than on separated paths. The most common type of collision and near accident involved both road users travelling in the same direction. Constraints on cycling and walking were found to surface from meanings of danger associated particularly with shared infrastructure, a material element of the practices. These issues are evidenced by a high near accident frequency, a low sense of safety, and a low willingness to travel on shared spaces. In addition, these issues were exacerbated by a lack of competences concerning space sharing, resulting in poor rapport and respect between pedestrians and cyclists. Significant effects regarding sense of safety were detected between pedestrians and cyclists and across age groups and genders with ordinal logistic regression, suggesting variance in how different groups experience meanings of danger. Intervening in the material element of the practices by preferring the provision of spatially separated infrastructure was considered to have potential to help overcome these constraints, due to the associated safety benefits and respondents’ more favourable position toward such infrastructure. In addition, working to develop a shared code of conduct for travel in shared environments could further mitigate constraints. Overcoming these constraints could assist the promotion of active travel and help improve the sustainability of transport while improving traffic safety and increasing physical activity.
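    To illustrate the kind of test used (with invented counts, not the survey's data), a chi-square test of independence on a contingency table can be run as follows:

```python
from scipy.stats import chi2_contingency

# Invented counts: near accidents experienced (yes/no) by path type
#                  near accident   none
table = [[120, 180],   # shared paths
         [ 60, 240]]   # separated paths

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
```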
  • Lindström, Olli-Pekka (2021)
    Until recently, database management systems focused on the relational model, in which data are organized into tables with columns and rows. Relational databases are known for the widely standardized Structured Query Language (SQL), transaction processing, and a strict data schema. However, with the introduction of Big Data, relational databases became too heavy for some use cases. In response, NoSQL databases were developed. The four best-known categories of NoSQL databases are key-value, document, column family, and graph databases. NoSQL databases impose fewer data consistency control measures to make processing more efficient. NoSQL databases haven’t replaced SQL databases in the industry: many legacy applications still use SQL databases, and newer applications also often require the stricter and more secure data processing of SQL databases. This is where the idea of SQL and NoSQL integration comes in. There are two mainstream approaches to combining the benefits of SQL and NoSQL databases: multi-model databases and polyglot persistence. Multi-model databases are database management systems that store and process data in multiple different data models under the same engine. Polyglot persistence refers to the principle of building a system architecture that uses different kinds of database engines to store data; systems implementing the polyglot persistence principle are called polystores. This thesis introduces SQL and NoSQL databases and their two main integration strategies, multi-model databases and polyglot persistence, and presents some representative multi-model databases and polystores. In conclusion, some challenges and future research directions for multi-model databases and polyglot persistence are introduced and discussed.
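    As a hedged illustration of the polyglot persistence principle (the classes below are stand-ins, not real database clients or specific products discussed in the thesis), an application layer can route different kinds of data to different stores behind one interface:

```python
class RelationalStore:
    """Stand-in for an SQL database holding transactional records."""
    def __init__(self):
        self.rows = []
    def insert(self, row):
        self.rows.append(row)

class DocumentStore:
    """Stand-in for a NoSQL document database holding flexible blobs."""
    def __init__(self):
        self.docs = {}
    def put(self, key, doc):
        self.docs[key] = doc

class OrderService:
    """Polyglot persistence: orders go to SQL, session data to NoSQL."""
    def __init__(self, sql, nosql):
        self.sql, self.nosql = sql, nosql
    def place_order(self, order_id, items, session):
        self.sql.insert({"id": order_id, "items": items})   # needs transactions
        self.nosql.put(order_id, {"session": session})      # schema-free

svc = OrderService(RelationalStore(), DocumentStore())
svc.place_order(1, ["book"], {"ip": "127.0.0.1"})
```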
  • Koivisto, Teemu (2021)
    Programming courses often receive large quantities of program code submissions to exercises, which, due to their large number, are graded automatically, with students provided feedback automatically as well. Teachers might never review these submissions, therefore losing a valuable source of insight into student programming patterns. This thesis researches how these submissions could be reviewed efficiently using a software system, and a prototype, CodeClusters, was developed as an additional contribution of this thesis. CodeClusters’ design goals are to allow the exploration of the submissions and specifically the finding of higher-level patterns that could be used to provide feedback to students. Its main features are full-text search and an n-gram similarity detection model that can be used to cluster the submissions. Design science research is applied to evaluate CodeClusters’ design and to guide the next iteration of the artifact, and qualitative analysis, namely thematic synthesis, is used to evaluate the problem context as well as the ideas of using software for reviewing and providing clustered feedback. The study method used was interviews, conducted with teachers who had experience teaching programming courses. Teachers were intrigued by the ability to review submitted student code and to provide more tailored feedback to students. The system, while still a prototype, is considered worthwhile to experiment with on programming courses. A tool for analyzing and exploring submissions seems important in enabling teachers to better understand how students have solved the exercises. Providing additional feedback can be beneficial to students, yet the feedback should be valuable and the students incentivized to read it.
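    As an invented illustration of the kind of n-gram similarity such a model might use (not the CodeClusters implementation), token n-grams of two submissions can be compared with Jaccard similarity:

```python
def ngrams(tokens, n=3):
    """Set of token n-grams from a tokenized submission."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity between two n-gram sets (0..1)."""
    return len(a & b) / len(a | b) if a | b else 1.0

sub1 = "for i in range ( 10 ) : print ( i )".split()
sub2 = "for j in range ( 10 ) : print ( j )".split()
print(jaccard(ngrams(sub1), ngrams(sub2)))  # high: near-identical structure
```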
  • Penttinen, Jussi (2021)
    HMC is a computational method built to sample efficiently from a high-dimensional distribution. Sampling from a distribution is typically a statistical problem, and hence much of the work on Hamiltonian Monte Carlo is written in the mathematical language of probability theory, which is perhaps not ideally suited to HMC, since HMC is at its core differential geometry. The purpose of this text is to present the differential geometric tools needed in HMC and then to build the algorithm itself methodically. Since Lee's introductory book on smooth manifolds covers the basics well, and to avoid simply reproducing his work, some basic knowledge of differential geometry is assumed of the reader. Similarly, the author being more comfortable with the notions of differential geometry, and to keep the length of this text down, most theorems connected to measure and probability theory are omitted from this work. The first chapter is an introduction covering the bare minimum of measure theory needed to motivate Hamiltonian Monte Carlo. The bulk of this text is in the second and third chapters. The second chapter presents the concepts of differential geometry needed to understand the abstract construction of Hamiltonian Monte Carlo. Those familiar with differential geometry can possibly skip the second chapter, though it may be worthwhile to at least flip through it to pick up the notation used in this text. The third chapter is the core of this text. There the algorithm is methodically built using the groundwork laid in the previous chapters. The most important part, and the theoretical heart of the algorithm, is presented in the sections discussing the lift of the target measure. The fourth chapter provides brief practical insight into implementing HMC and also briefly discusses how HMC is currently being improved.
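    For illustration, here is a generic textbook sketch of an HMC transition with a leapfrog integrator (not the thesis's geometric formulation), targeting a density exp(log_prob) with an identity mass matrix:

```python
import numpy as np

def hmc_step(q, log_prob_grad, log_prob, step=0.1, n_leapfrog=20, rng=None):
    """One HMC transition targeting exp(log_prob)."""
    rng = rng or np.random.default_rng()
    p = rng.normal(size=q.shape)                  # resample momentum
    q_new, p_new = q.copy(), p.copy()
    for _ in range(n_leapfrog):                   # leapfrog integration
        p_new += 0.5 * step * log_prob_grad(q_new)
        q_new += step * p_new
        p_new += 0.5 * step * log_prob_grad(q_new)
    # Metropolis accept/reject on the Hamiltonian error
    h_old = -log_prob(q) + 0.5 * p @ p
    h_new = -log_prob(q_new) + 0.5 * p_new @ p_new
    return q_new if np.log(rng.uniform()) < h_old - h_new else q

# Example: sample a standard 2D Gaussian
logp = lambda q: -0.5 * q @ q
grad = lambda q: -q
q = np.zeros(2)
samples = []
for _ in range(1000):
    q = hmc_step(q, grad, logp)
    samples.append(q)
```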
  • Suominen, Henri (2021)
    Online hypothesis testing occurs in many branches of science. Most notably, it is of use when there are too many hypotheses to test with traditional multiple hypothesis testing, or when the hypotheses are created one by one. When testing multiple hypotheses one by one, the order in which the hypotheses are tested often has great influence on the power of the procedure. In this thesis we investigate the applicability of reinforcement learning tools to the exploration-exploitation problem that often arises in online hypothesis testing. We show that a common reinforcement learning tool, Thompson sampling, can be used to gain a modest amount of power using a method for online hypothesis testing called alpha-investing. Finally, we examine the size of this effect using both synthetic data and a practical case involving simulated data on urban pollution. We found that, by choosing the order of the tested hypotheses with Thompson sampling, the power of alpha-investing is improved. The level of improvement depends on the assumptions that the experimenter is willing to make and on their validity. In a practical situation the presented procedure rejected up to 6.8 percentage points more hypotheses than testing the hypotheses in a random order.
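    As a generic illustration of the bandit tool named above (not the thesis's exact procedure; rates and counts are invented), Beta-Bernoulli Thompson sampling picks, at each round, the arm with the highest sampled success probability:

```python
import numpy as np

rng = np.random.default_rng(5)
true_rates = [0.05, 0.20, 0.10]          # unknown "rejection rates" per stream
successes = np.ones(3)                   # Beta(1, 1) priors
failures = np.ones(3)

for _ in range(2000):
    theta = rng.beta(successes, failures)    # sample a rate per arm
    arm = int(np.argmax(theta))              # play the most promising arm
    reward = rng.random() < true_rates[arm]  # e.g. hypothesis rejected or not
    successes[arm] += reward
    failures[arm] += 1 - reward

print(successes / (successes + failures))    # posterior means favor arm 1
```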
  • Törnroos, Topi (2021)
    Application Performance Management (APM) is a growing field, and APM tools on the market tend to be complex enterprise solutions with features ranging from traffic analysis and error reporting to real-user monitoring and business transaction management. This thesis is a study done on behalf of Veikkaus Oy, a Finnish government-owned game company and betting agency. It serves as a look into the current state of the art among leading APM tools, as well as a requirements analysis done from the perspective of the company’s IT personnel. A list of requirements was gathered and scored based on perceived importance, and four APM tools on the market (Datadog APM, Dynatrace, New Relic and AppDynamics) were compared with each other and scored against the gathered requirements. In addition, open-source alternatives were considered and investigated. Our results suggest that the leading APM vendors have feature-wise very similar products with only marginal differences between them. In general, APMs were deemed useful and valuable to the company: able to assist the work of a wide variety of IT personnel, to replace many tools currently in use by Veikkaus Oy, and to simplify its application ecosystem.
  • Vainio, Marko (2021)
    When developing an application using a microservice architecture, the application consists of multiple distributed, independent, and loosely coupled services. These services then communicate with each other through a network in order to form a functioning application. The benefits of developing an application as a set of independent services, as opposed to a single monolithic application, are numerous. The services may be developed and deployed independently, which enables, for example, the use of a different programming language for a specific service. Services designed for specific tasks are also usually relatively small and as such easier to develop, understand and test. The challenges of building an application utilising a microservice architecture, as opposed to the traditional monolithic one, include identifying suitable functionalities that can be extracted into a service; end-to-end testing of the extracted functionality also becomes challenging. Throughout this thesis the most important benefits and challenges of the microservice architecture are investigated with a literature review as well as in practice with a case study. During the case study, a specific functionality in a largely monolithic application was transformed into a microservice. The benefits and challenges that became evident during the process are covered in the thesis.