
Browsing by Title


  • Kangasniemi, Ilmari (2016)
    A coarse structure is an abstract construction describing the behavior of a space at large distances. In this thesis, a variety of existing results on coarse structures are presented, with the main focus on coarse embeddability into Hilbert spaces. The end goal is to present a hierarchy of three coarse invariants: coarse embeddability into a Hilbert space, a property of metric spaces known as Property A, and finite asymptotic dimension. After outlining the necessary prerequisites and notation, the first main part of the thesis is an introduction to the basics of coarse geometry. Coarse structures are defined, and it is shown how a metric induces a coarse structure. Coarse maps, equivalences and embeddings are defined, and some of their basic properties are presented. Alongside this, comparisons are made to both topology and uniform topology, and results related to the metrizability of coarse spaces are outlined. Once the basics of coarse structures have been presented, the focus shifts to coarse embeddability into Hilbert spaces, which has become a point of interest due to its applications to several unsolved conjectures. Two concepts related to coarse embeddability into Hilbert spaces are presented, the first being Property A. It is shown that Property A implies coarse embeddability into a Hilbert space, and that it is a coarse invariant. The second main concept related to coarse embeddability is asymptotic dimension, a coarse counterpart to the Lebesgue dimension of topological spaces. Various definitions of asymptotic dimension are given and shown to be equivalent. The coarse invariance of asymptotic dimension is shown, and the dimensions of several example spaces are derived. Finally, it is shown that finite asymptotic dimension implies coarse embeddability into a Hilbert space, and that for spaces with bounded geometry it also implies Property A.
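For orientation, the central notion the abstract revolves around can be stated explicitly; this is the standard metric-space formulation from the coarse-geometry literature, not a quotation from the thesis:

```latex
A map $f\colon X \to Y$ between metric spaces is a \emph{coarse embedding}
if there exist non-decreasing functions
$\rho_-,\rho_+\colon [0,\infty)\to[0,\infty)$ with
$\lim_{t\to\infty}\rho_-(t)=\infty$ such that
\[
  \rho_-\bigl(d_X(x,y)\bigr) \;\le\; d_Y\bigl(f(x),f(y)\bigr)
  \;\le\; \rho_+\bigl(d_X(x,y)\bigr)
  \qquad \text{for all } x,y\in X.
\]
```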
  • Teräväinen, Joni Petteri (2014)
    Komparatiivinen alkulukuteoria tutkii alkulukujen jakaumaa eri jäännösluokkiin ja erityisesti jakauman vääristymiä. Keskeinen tutkimuksen kohde on joukko P_{ q; a_1, ... , a_r } = { x ≥ 2 : π (x; q ,a_1 )> ... > π (x; q, a_r)}, missä π (x; q, a) laskee alkulukujen muotoa qn+a määrän lukuun x asti ja jäännösluokat a_i ovat yhteistekijättömiä moduluksen q ≥ 2 kanssa. P. Tšhebyšhov huomasi jo vuonna 1853, että lähes aina π (x; 4,3) on suurempi kuin π (x; 4,1), vaikka alkulukulauseen aritmeettisissa jonoissa mukaan nämä ovat asymptoottisesti yhtä suuret. Myös muissa moduluksissa nähdään sama ilmiö: Osassa jäännösluokista on useimmiten enemmän alkulukuja rajaan x asti kuin toisissa. Tämän havainnon muotoileminen ei kuitenkaan ole triviaalia, sillä joukoilla P_{q ;a_1, ... ,a_r} ei aina ole asymptoottista tiheyttä. Vuonna 1994 M. Rubinstein ja P. Sarnak tekivät läpimurron joukkojen P_{q; a_1, ... ,a_r} tutkimisessa osoittamalla, että niiden logaritmiset tiheydet ovat positiivisia, mikäli oletetaan kaksi yleisesti uskottua hypoteesia. Joukon A logaritminen tiheys on δ(A) = lim_{X→ ∞}\frac{1}{log X}∈t_{2}^X \frac{dt}{t}, kun raja-arvo on olemassa. Rubinsteinin ja Sarnakin oletukset ovat yleistetty Riemannin hypoteesi ja hypoteesi Dirichlet'n L-funktioiden nollakohtien lineaarisesta riippumattomuudesta rationaalilukujen yli. Ilman näitä oletuksia Rubinsteinin ja Sarnakin tuloksia ei ole todistettu. Tässä pro gradu -tutkielmassa todistetaan Rubinsteinin ja Sarnakin artikkelin tuloksia yksityiskohtaisesti. Artikkelissa ja tässä tutkielmassa osoitetaan olettaen samat konjektuurit, että δ(P_{a,b})>\frac{1}{2} jos ja vain jos a on neliönepäjäännös ja b neliönjäännös (mod q). Tämä ehto määrittää siis kaikki tapaukset, joissa alkulukuja qn+a määrän voi sanoa olevan yleensä suurempi kuin alkulukujen qn+b johonkin rajaan asti. Lisäksi osoitetaan, että δ(P_{q; a, b, c}) = \frac{1}{6} tietyissä tapauksissa, joiden uskotaan olevan ainoat mahdolliset. 
Rubinstein ja Sarnak osoittivat myös, että moduluksen kasvaessa alkulukujen kilpailut tasaantuvat moduluksen kasvaessa eli δ(P_{q; a_1, ... ,a_r}) → \frac{1}{r!}, kun q→ ∞. Tässä tutkielmassa todistetaan vastaava väite neliönjäännös- ja neliönepäjäännösalkulukujen väliselle vertailulle; tämä on myös mainitussa artikkelissa. Edellä mainittujen lauseiden todistusta varten johdetaan Rubinsteinin ja Sarnakin tulokaava alkulukujen vertailuun liittyvän mitan Fourier-muunnokselle. Yhdessä eksplisiittisen kaavan ja oletusten nojalla tämä mahdollistaa mitan ominaisuuksien hallitsemisen. Lopuksi arvioidaan edellä mainitun kaltaisten mittojen vähenemisnopeutta. Luvussa 1 esitetään historiaa ja motivaatiota. Luvussa 2 todistetaan klassinen eksplisiittinen kaava funktioon π (x; q,a) läheisesti liittyvälle funktiolle. Luku 3 kertaa mittateorian tuloksia, joita käytetään apuna pääluvussa 4.
  • Tolvanen, Tuuli (2020)
    The objective of this thesis is to introduce the concept of compound variables and explain their use in one application specifically, as the total claim amount of an insurance company can be viewed as a compound variable. We study both the average behaviour as well as the tail behaviour of compound variables. Before delving into the results concerning the tails of compound variables, we aim to present an overview about the general theory and treat the average behaviour of compound variables first. We familiarize the reader with rudimentary concepts such as moment and cumulant generating functions. Along the way, the reader will also gain an understanding of both mixed variables as well as compound mixed variables. We state and prove some fundamental results concerning the expectation, variance and moment generating functions of compound variables. When the concept of compound variable is used to interpret the total claim amount, we also find the number of claims to be of interest. Since it is a random variable, we wish to be able to model it somehow. In the case of a general compound variable, the number of claims simply corresponds to the number of summands in the variable. We consider compound Poisson variables as a special case of compound variables. The reason for this is that if the counting variable or the number of claims variable is Poisson distributed, then the compound variable is a compound Poisson random variable. We also enhance the modelling of the number of claims by presenting mixing variables into the model. As a more general version for determining the expectation of a random sum we prove Wald's identity. It does not assume the independence of the counting variable and the increments in the same way we do in the definition of a compound variable. Towards the end, we shift the focus from general theory and average behaviour to tail behaviour of compound variables. 
We introduce the reader to the classes of heavy-tailed and subexponential distributions needed to formulate a few results that give an asymptotically equivalent approximation for the tail function of a compound variable. We prove the result for the case of negative expectation of the increments (summands), and also present results for the case of non-negative expectation of the increments. The latter situation is of particular interest for total claim amounts, if we assume the claims are non-negative random variables.
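The averages discussed above (E[S] = E[N]E[X], which follows from Wald's identity, and Var[S] = E[N]E[X²] for a compound Poisson sum) can be checked by simulation. A minimal stdlib-Python sketch with exponentially distributed claims; the parameter values are hypothetical:

```python
import math
import random

def poisson_sample(lam, rng):
    """Poisson sample via Knuth's method (fine for moderate lam)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def compound_poisson_sample(lam, claim_mean, rng):
    """S = X_1 + ... + X_N with N ~ Poisson(lam), X_i ~ Exp(mean claim_mean)."""
    n = poisson_sample(lam, rng)
    return sum(rng.expovariate(1.0 / claim_mean) for _ in range(n))

rng = random.Random(1)
lam, mu = 3.0, 2.0
samples = [compound_poisson_sample(lam, mu, rng) for _ in range(200_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean)  # theory: E[S]   = lam * mu            = 6.0
print(var)   # theory: Var[S] = lam * 2 * mu**2     = 24.0
```

With 200 000 samples both estimates land close to the theoretical values 6.0 and 24.0.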
  • Lehtonen, Ossi (2024)
    Machine learning (ML) and mobile applications are two major technology trends of the past decade. Arguably, developing a successful mobile application is difficult, but ML can help with the challenges. ML enables new capabilities for an application, but there are multiple obstacles to pass before publishing an ML feature to production. One of them is the inconsistency of mobile devices' hardware. A developer might achieve fast and accurate enough inference in their own sandbox, while actual end-users observe a poor user experience (UX) due to inefficient inference, because their device hardware differs from the developer's setup. Cross-platform frameworks are intended to provide a consistent UX across operating systems (OSs) with a single codebase, which in turn shortens development time compared to creating individual native applications for different OSs. Even though applications built with a cross-platform framework are generally less performant than those built with native languages, they can achieve near-native performance in certain tasks through the use of native modules. This makes it interesting to combine ML inference with cross-platform development and to compare the inference capabilities of the currently most common cross-platform frameworks, React Native and Flutter. The results of this thesis indicate that running hardware-accelerated ML inference is possible in both frameworks using open-source libraries, and that inference is efficient in terms of execution time, especially on newer devices. However, to achieve generally good inference time and accuracy across different devices without sacrificing UX, one should most likely lean towards React Native.
  • Paloposki, Viljami (2021)
    The aim of this Master's thesis is to familiarize the reader with Dirichlet's famous proof of the convergence of Fourier series. In this work I have tried to preserve the spirit of Dirichlet's proof by going through it in the way Dirichlet himself wrote it. The parts of the proof that Dirichlet passed over I have tried to work out as Dirichlet might have done. The Fourier series treated in this work are trigonometric series by means of which functions satisfying certain conditions can be represented. The essential part of the proof is thus to show that a large class of functions can be represented as infinite trigonometric series. The thesis begins with a summary in the first chapter. The second chapter describes the historical background of the proof, its significance, and later results. After this, the theorems, definitions, and calculations needed in the proof are covered in chapters three and four. In chapter five the integral of the function sin(x)/x from zero to infinity is computed; it is used in the following chapter. The sixth and final chapter finally goes through Dirichlet's proof. It is divided into two parts. The first part covers theorems and lemmas, which are then combined into a whole. The second part goes through the main proof using the first part. A good topic for further research would be the proofs of the convergence of Fourier series given after Dirichlet. Dirichlet presented his own sufficient conditions for the convergence of Fourier series, but other sufficient conditions have been presented since. Collecting and comparing these could be quite an interesting research topic.
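The integral of sin(x)/x from zero to infinity, which equals π/2, can be verified numerically. A stdlib-Python sketch: the partial integrals up to Nπ oscillate around the limit, so averaging two consecutive ones accelerates convergence (the quadrature itself is plain composite Simpson):

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def sinc(x):
    return 1.0 if x == 0 else math.sin(x) / x

N = 400
S_N  = simpson(sinc, 0, N * math.pi, 200 * N)
S_N1 = simpson(sinc, 0, (N + 1) * math.pi, 200 * (N + 1))
approx = (S_N + S_N1) / 2  # average out the oscillating tail term
print(approx, math.pi / 2)  # both ≈ 1.5708
```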
  • Sillanpää, Ilkka (University of Helsinki, 2002)
    The one-dimensional method of characteristics is a forward method for determining ionospheric currents from electric and magnetic field measurements. In this work the applicability of the method was studied with respect to polar electrojet and shear flow events, as these are the predominant ionospheric current situations and are often one-dimensional, the fields thus depending only on the latitude. In this work the characteristic equations are derived from Maxwell's equations and Ohm's law. A program was developed with an algorithm applying the one-dimensional method of characteristics to ionospheric electric field measurements by the STARE radars and ground-based magnetic field measurements by the IMAGE magnetometer network. The magnetic field was upward continued to the ionospheric horizontal current altitude (100 km). The applicability of the one-dimensional method of characteristics was shown by analyzing the results from three electric current events. The length of these events varied between 10 and 40 minutes, and the study area was limited to the STARE and IMAGE measurement area over Scandinavia and part of the Arctic Ocean. The results were accurate and relatively detailed and gave insight into, for example, the origin of the features of the field-aligned currents. The estimated ratio of the Hall and Pedersen conductances, the alpha parameter, is needed in the method. It was shown that the alpha dependence follows the theoretical predictions, and thus the Hall conductance and the east-west component of the horizontal currents (the Hall current, which dominated the horizontal currents) have practically no dependence on alpha. The general features of the conductance and current profiles were also not dependent on alpha. Field-aligned current (FAC) results obtained during one of the events were compared with concurrent Cluster satellite measurements at a high-altitude orbit above the area of study.
Two maxima and a minimum of FAC occurred simultaneously in the results, with very comparable numerical values after mapping the satellite results down. The one-dimensional method of characteristics was found very successful in determining ionospheric conductances and currents in detail from ionospheric electric and magnetic field measurements when the assumption of one-dimensionality of the event is valid. It seems quite feasible to develop the algorithm for application of the method over longer time periods, whereas here only individual events were studied.
  • Rauhala, Timo (2013)
    The electric solar wind sail (E-Sail) is a space propulsion invention exploiting the dynamic pressure of the solar wind. It uses centrifugally stretched, positively charged conductive tethers to create thrust from the momentum flux of the solar wind. In space the tethers must be micrometeoroid resistant. We use ultrasonic bonding to create an aluminum multiwire Heytether structure to address this issue. For this work we produced a 1 km continuous piece of multifilament E-Sail tether of µm-diameter aluminum wires using a custom-made automatic tether factory. The tether, comprising 90704 bonds between 25 and 50 µm diameter wires, is reeled onto a metal reel. The total mass of the 1 km tether is 10 g. We reached a production rate of 70 m / 24 h and a quality level of 1 ‰ loose bonds and 2 ‰ rebonded ones. Thus, we demonstrated that the production of long electric solar wind sail tethers is both possible and practical. Based on images captured of each bond, a post-production analysis was done to determine the failure rate and the types of failures that occurred during production.
  • Lohi, Heikki (2023)
    Stochastic homogenization consists of qualitative and quantitative homogenization. It studies the solutions of certain elliptic partial differential equations that exhibit rapid random oscillations in some heterogeneous physical system. Our aim is to homogenize these perturbations into some regular large-scale limiting function by utilizing particular corrector functions and homogenized matrices. This thesis mainly considers elliptic qualitative homogenization, and it is based on a research article by Scott Armstrong and Tuomo Kuusi. The purpose is to elaborate on the topics presented there by drawing on some other notable references in the stochastic homogenization literature written throughout the years. An effort has been made to explain further details compared to the article, especially with respect to the proofs of some important results. Hopefully, this thesis can serve as an accessible introduction to qualitative homogenization theory. In the first chapter, we will begin by establishing some notation and preliminaries, which will be utilized in the subsequent chapters. The second chapter considers the classical case, where every random coefficient field is assumed to be periodic. We will later examine the general situation that does not require periodicity. However, the periodic case still provides useful results and strategies for the general situation. Stochastic homogenization theory involves multiple random elements and hence heavily applies probability theory to the theory of partial differential equations. For this reason, the third chapter assembles the most important probabilistic concepts and results that will be needed. In particular, the ergodic theorems for R^d and Z^d will play a central part later on. The fourth chapter introduces the general case, which no longer requires periodicity.
The only assumption needed for the random coefficient fields is stationarity, that is, the probability measure P is invariant under translations in Z^d. We will state and prove important results such as the homogenization of the Dirichlet problem and the qualitative homogenization theorem for stationary random coefficient fields. In the fifth chapter, we will briefly consider another approach to qualitative homogenization. This so-called variational approach was discovered in the 1970s and 1980s, when Ennio De Giorgi and Sergio Spagnolo, along with Gianni Dal Maso and Luciano Modica, studied qualitative homogenization. We will provide a second proof of the qualitative homogenization theorem based on their work. An additional assumption regarding the symmetry of the random coefficient fields is needed. The last chapter is dedicated to the large-scale regularity theory of solutions of uniformly elliptic equations. We will concretely see the purpose of the stationarity assumption, as it turns out that it guarantees much greater regularity properties compared to non-stationary coefficient fields. The study of large-scale regularity theory is very important, especially on the quantitative side of stochastic homogenization.
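Schematically, the qualitative homogenization theorem discussed above takes the following form; this is the standard statement from the literature, with generic notation rather than the thesis's own:

```latex
% For a stationary, uniformly elliptic random coefficient field a and a
% bounded domain U, the solutions u^\varepsilon of
\[
  -\nabla \cdot \Bigl( a\bigl(\tfrac{x}{\varepsilon}\bigr)
    \nabla u^{\varepsilon} \Bigr) = f \ \text{ in } U,
  \qquad u^{\varepsilon} = g \ \text{ on } \partial U,
\]
% converge almost surely, as \varepsilon \to 0, to the solution u of the
% homogenized problem with a constant, deterministic effective matrix:
\[
  -\nabla \cdot \bigl( \bar{a} \, \nabla u \bigr) = f \ \text{ in } U,
  \qquad u = g \ \text{ on } \partial U.
\]
```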
  • Li, Jichao (2024)
    Correlation functions in a superconformal field theory are strictly constrained by conformal symmetry. Notably, one-point functions of conformal operators always vanish. However, when a defect is inserted into the spacetime of the field theory, certain one-point functions become non-zero due to the broken conformal symmetry, highlighting the special properties of the defect. One interesting type of defect is the domain wall, which separates spacetime into two regions with distinct vacua. The domain wall version of $\mathcal{N}=4$ supersymmetric Yang-Mills (SYM) theory has been extensively studied in recent years. In this context, the supersymmetric domain wall preserves integrability, allowing one to evaluate one-point functions in the defect field theory using integrability techniques. As an analogue of the domain wall version of $\mathcal{N}=4$ SYM theory, this thesis focuses on the ABJM theory with a 1/2-BPS domain wall, meaning that the domain wall preserves half of the original supersymmetry. We first review integrability methods, e.g. the coordinate Bethe ansatz and the algebraic Bethe ansatz for the $\mathfrak{su}(2)$ Heisenberg spin chain. The spectrum of the spin chain can be determined by solving sets of Bethe equations. Moreover, the rational $Q$-system is examined, which solves the Bethe equations efficiently and automatically eliminates all nonphysical solutions. On the field theory side, we first review the original ABJM theory and its spectral integrability following J. A. Minahan's work in 2009. There exists an underlying quantum $\mathfrak{su}(4)$ spin chain with alternating even and odd sites, whose Hamiltonian can be identified with the two-loop dilatation operator of ABJM theory in the planar limit. This correspondence allows us to find the spectrum of ABJM theory using the Bethe ansatz. We study the $\mathfrak{su}(4)$ alternating spin chain and demonstrate the procedure for constructing eigenstates of ABJM theory.
Finally, we study the tree-level one-point functions in the domain wall version of ABJM theory. We derive the classical solutions for the scalar fields that describe a domain wall and explicitly demonstrate how the domain wall preserves half of the supersymmetry. With these classical solutions, we define a domain wall version of ABJM theory. Then, we introduce the so-called Matrix Product State, which is a boundary state in the spin chain's Hilbert space. The domain wall can be identified with an integrable matrix product state, leading to a compact determinant formula for the one-point functions in spin chain language. Consequently, we can evaluate one-point functions explicitly using the Bethe ansatz and boundary integrability.
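For orientation, the $\mathfrak{su}(2)$ Bethe equations reviewed above can be written down explicitly; this is the textbook form for the XXX$_{1/2}$ Heisenberg chain of length $L$ with $M$ magnons (conventions and normalizations vary between references):

```latex
\[
  \left( \frac{u_j + \tfrac{i}{2}}{u_j - \tfrac{i}{2}} \right)^{L}
  = \prod_{\substack{k=1 \\ k \neq j}}^{M}
    \frac{u_j - u_k + i}{u_j - u_k - i},
  \qquad j = 1, \dots, M,
\]
% with the energy of a Bethe state given, up to an overall normalization, by
\[
  E = \sum_{j=1}^{M} \frac{1}{u_j^{2} + \tfrac{1}{4}}.
\]
```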
  • Debraise, Nora (2013)
    Aminodiene-based Diels-Alder reactions constitute attractive solutions for the construction of six-membered carbocyclic subunits of a number of organic compounds, including natural products. Functionalized acylaminodienes are key intermediates in those reactions with their ability to incorporate various heteroatoms to the Diels-Alder adducts. Taking advantage of this, various 1-acylamino-2-cyclohexene derivatives were synthesised with a novel protocol involving an in situ generation of an amidodiene intermediate. A novel one-pot procedure for the multicomponent coupling reactions of amides, aldehydes, and dienophiles using room temperature ionic liquids as reaction media and catalyst under solvent-free conditions was developed. Short reaction times, easy microwave-assisted procedure, solvent-free conditions and good conversion to product are features of this new protocol.
  • Luhtala, Juuso (2023)
    "Don't put all your eggs in one basket" is a common saying that applies particularly well to investing. Thus, the concept of portfolio diversification exists and is generally accepted to be a good principle. But is it always, and in every situation, preferable to diversify one's investments? This Master's thesis explores this question in a restricted mathematical setting. In particular, we examine the profit-and-loss distribution of a portfolio of investments using probability distributions that produce extreme values more frequently than others. The theoretical restriction we place in this thesis is that the random variables modelling the profits and losses of individual investments are assumed to be independent and identically distributed. The results of this Master's thesis are originally from Rustam Ibragimov's article Portfolio Diversification and Value at Risk Under Thick-Tailedness (2009). The main results concern two particular cases. The first concerns probability distributions which produce extreme values only moderately often; in this case, the accepted wisdom of portfolio diversification is proven to make sense. The second concerns probability distributions which can be considered to produce extreme values extremely often; in this case, portfolio diversification is proven to increase the overall risk of the portfolio, and therefore it is preferable not to diversify one's investments. In this Master's thesis we first formally introduce and define heavy-tailed probability distributions as those that produce extreme values much more frequently than others. Second, we introduce and define particular important classes of probability distributions, most of which are heavy-tailed.
Third, we will define portfolio diversification by utilizing a mathematical theory that classifies how far apart or close the components of a vector are from each other. Finally, we will use all the introduced concepts and theory to answer the question of whether portfolio diversification is always preferable. The answer is that there are extreme situations where portfolio diversification is not preferable.
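The dichotomy described above can be illustrated with a small Monte Carlo sketch (stdlib Python; Pareto losses with tail index alpha are a stand-in for the thesis's distribution classes, and all parameter values are hypothetical). For a moderately heavy tail (alpha = 3) the diversified portfolio has the smaller 99 % value-at-risk, while for an extremely heavy tail (alpha = 0.5) diversification makes the tail risk worse:

```python
import random

def pareto(alpha, rng):
    """Pareto loss on [1, inf): P(X > t) = t ** (-alpha)."""
    return (1.0 - rng.random()) ** (-1.0 / alpha)

def var_99(losses):
    """Empirical 99 % value-at-risk (upper quantile of the losses)."""
    return sorted(losses)[int(0.99 * len(losses))]

def compare(alpha, n_assets=10, n_sims=50_000, seed=0):
    """VaR of a single asset vs. an equally weighted portfolio."""
    rng = random.Random(seed)
    single, diversified = [], []
    for _ in range(n_sims):
        xs = [pareto(alpha, rng) for _ in range(n_assets)]
        single.append(xs[0])
        diversified.append(sum(xs) / n_assets)
    return var_99(single), var_99(diversified)

s, d = compare(alpha=3.0)
print(f"alpha=3.0: single VaR {s:.2f}, diversified VaR {d:.2f}")
s, d = compare(alpha=0.5)
print(f"alpha=0.5: single VaR {s:.2f}, diversified VaR {d:.2f}")
```

The reversal at alpha = 0.5 reflects the fact that for tail indices below one, sums of iid losses are dominated by the largest summand, so averaging inflates rather than dampens the tail.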
  • Meriläinen, Jere (2019)
    In this thesis we cover some fundamental topics in mathematical finance and construct market models for option pricing. An option on an asset is a contract giving the owner the right, but not the obligation, to trade the underlying asset for a fixed price at a future date. Our main goal is to find a price for an option that does not allow the existence of an arbitrage, that is, a way to make a riskless profit. We will see that hedging has an essential role in this pricing. Both hedging and pricing are very important tasks for an investor trading in the constantly growing derivative markets. We begin by assuming that the time parameter is a discrete variable. The advantage of this approach is that we can jump into financial concepts with only a small number of prerequisites. A proper understanding of these concepts in discrete time is crucial before moving to continuous-time market models, that is, models in which the time parameter is a continuous variable. This may seem like a minor transition, but it has a significant impact on the complexity of the mathematical theory. In discrete time, we review how the existence of an equivalent martingale measure characterizes market models. If such a measure exists, then the market model does not contain arbitrages and the price of an option is determined by this measure via the conditional expectation. Furthermore, if the measure is also unique, then all European options (ones that can be exercised only at a predetermined time) are hedgeable in the model, that is, we can replicate the payoffs of those options with strategies constructed from other assets without adding or withdrawing capital after the initial investment. In this case the market model is called complete. We also study how hedging can be done in incomplete market models, particularly how to build risk-minimizing strategies.
After that, we derive some useful tools for the problems of finding optimal exercise and hedging strategies for American options (ones that can be exercised at any moment before a fixed time) and introduce the Cox-Ross-Rubinstein binomial model to use as a testbed for the methods developed so far. In continuous time, we begin by constructing stochastic integrals with respect to Brownian motion, which is the stochastic component in our models. We then study important properties of stochastic integrals extensively. These help us comprehend the dynamics of asset prices and portfolio values. In the end, we apply the tools we have developed to the Black-Scholes model. In particular, we use Itô's lemma and Girsanov's theorem to derive the Black-Scholes partial differential equation, and we further exploit the Feynman-Kac formula to obtain the celebrated Black-Scholes formula.
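The Cox-Ross-Rubinstein model mentioned above admits a compact implementation. A stdlib-Python sketch (the market parameters are hypothetical) pricing a European call on the binomial tree and comparing it with the Black-Scholes closed form it converges to:

```python
import math

def crr_call(S0, K, r, sigma, T, n):
    """European call in the Cox-Ross-Rubinstein binomial model."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1 / u
    q = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up-probability
    price = 0.0
    for k in range(n + 1):
        prob = math.comb(n, k) * q**k * (1 - q) ** (n - k)
        price += prob * max(S0 * u**k * d ** (n - k) - K, 0.0)
    return math.exp(-r * T) * price       # discount the expected payoff

def bs_call(S0, K, r, sigma, T):
    """Black-Scholes closed form; standard normal CDF via erf."""
    N = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    d1 = (math.log(S0 / K) + (r + sigma**2 / 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

print(crr_call(100, 100, 0.05, 0.2, 1.0, 1000))  # ≈ 10.45
print(bs_call(100, 100, 0.05, 0.2, 1.0))         # ≈ 10.45
```

As the number of steps n grows, the binomial price converges to the Black-Scholes value, which is exactly the transition from discrete to continuous time that the thesis traces.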
  • Siilasjoki, Niila Johan (2024)
    Machine learning operations (MLOps) is a paradigm at the intersection of machine learning (ML), software engineering, and data engineering. It supports the development and operation of ML software by providing principles, components, and workflows that form an MLOps operational support system (OSS) platform. The increasing use of ML, with growing data sizes and model complexity, has created a challenge: MLOps OSS platforms require cloud and high-performance computing (HPC) environments to achieve flexible and efficient scalability for different workflows. Unfortunately, few open-source solutions are user-friendly or viable enough to be utilized by an MLOps OSS platform, which is why this thesis proposes a bridge solution utilized by a pipeline to address the problem. We used Design Science Methodology to define the problem, set objectives, design and demonstrate the implementation, and evaluate the solution. The resulting solutions are an environment bridge called the HTC-HPC bridge and a pipeline called the Cloud-HPC pipeline that uses it. We defined a general model for Cloud-HPC MLOps pipelines and implemented the required functions in a use-case-suitable infrastructure ecosystem and MLOps OSS platform using open-source, provided, and self-implemented software. The demonstration and evaluation showed that the HTC-HPC bridge and Cloud-HPC pipeline provide easily set up, customizable, and scalable workflow automation, which can be used for typical ML research workflows. However, they also showed that the bridge needs an improved multi-tenancy design and that the pipeline requires templates for a better user experience. These aspects, alongside testing use case potential and finding real-world use cases, are left as future work.
  • Pakarinen, Piia (2013)
    The purpose of this study was to examine why mathematics is regarded as a boys' subject and to consider ways in which girls could be encouraged to trust their own abilities. There is hardly any difference between girls' and boys' mathematical performance in comprehensive school, but boys' attitudes towards mathematics are more positive than girls'. Boys trust their own abilities more and are bolder in applying what they know. The study surveys ninth-graders' conceptions of studying mathematics and looks for differences between boys and girls. 32 pupils from a lower secondary school in Hämeenlinna took part in the study, 19 of them boys and 13 girls. The study was carried out during a mathematics lesson in May 2013. The data were collected with a questionnaire consisting of open-ended questions. The questions were formulated so that the answers would be unambiguous and easy to give in writing. The data were analyzed with both quantitative and qualitative methods. In the study, most pupils estimated that girls and boys are equally good at mathematics. Boys reported liking mathematics considerably more often than girls did, and girls' belittling of their own mathematical performance and weaker confidence in their own mathematical skills came up in several answers. Experiences of success are significant in studying mathematics; they can improve motivation and lead to better results. Success is experienced, for example, through a good test grade, when one is able to help a friend solve a problem, or when a friend helps with a difficult problem.
  • Bernelius, Venla (University of Helsinki, 2005)
    The aim of this study is to find out how urban segregation is connected to the differentiation of educational outcomes in public schools. The connection between urban structure and educational outcomes is studied at both the primary and the secondary school level. The secondary purpose of this study is to find out whether the free school choice policy introduced in the mid-1990s has an effect on the educational outcomes in secondary schools or on the observed relationship between urban structure and educational outcomes. The study is quantitative in nature, and the most important method used is statistical regression analysis. The educational outcome data, covering the years 1999 to 2002, were provided by the Finnish National Board of Education, and the data containing variables describing the social and physical structure of Helsinki were provided by Statistics Finland and City of Helsinki Urban Facts. The central observation is that there is a clear connection between urban segregation and differences in educational outcomes in public schools. With variables describing urban structure, it is possible to statistically explain up to 70 % of the variation in educational outcomes in primary schools and 60 % of the variation in secondary schools. The most significant variables in relation to low educational outcomes in Helsinki are an abundance of public housing, a low educational status of the adult population, and high numbers of immigrants in the school's catchment area. The regression model has been constructed using these variables. The lower coefficient of determination for secondary schools is mostly due to the effects of secondary school choice. Studying the public school market revealed that students selecting a secondary school outside their local catchment area increase the variation of educational outcomes between secondary schools.
When the number of students selecting a school outside their local catchment area is taken into account in the regressional model, it is possible to explain up to 80 % of the variation in educational outcomes in the secondary schools in Helsinki.
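The "up to 70 %" and "up to 80 %" figures above are coefficients of determination (R²) from regression models. As a minimal sketch of how such a figure is computed for a simple one-predictor fit (the data below is synthetic and purely illustrative, not from the study):

```python
def r_squared(xs, ys):
    """Coefficient of determination (R^2) for a simple linear
    regression of ys on xs, fitted by ordinary least squares."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

# Hypothetical example: predictor (e.g. a structural variable of the
# catchment area) against a school-level outcome score.
xs = [0, 1, 2, 3, 4]
ys = [1.0, 2.9, 5.1, 7.0, 9.0]
print(r_squared(xs, ys))  # close to 1: the predictor explains nearly all variation
```

An R² of 0.70 would mean that 70 % of the between-school variance in outcomes is accounted for by the structural variables in the model; the actual study uses multiple predictors, which this one-variable sketch omits.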
  • Herranen, Mikko (2013)
    Partially ordered sets (posets) have various applications in computer science, ranging from database systems to distributed computing. Content-based routing in publish/subscribe systems is a major poset use case and requires efficient online poset algorithms, including efficient insertion and deletion. We study the query and total complexities of online operations on posets and poset-like data structures. The main data structures considered are the incidence matrix, the Siena poset, ChainMerge, and the poset-derived forest. The contributions of this thesis are twofold. First, we present an online adaptation of the ChainMerge data structure as well as several novel poset-derived forest variants. We study the effectiveness of a first-fit-equivalent ChainMerge online insertion algorithm and show that it performs close to optimally query-wise while requiring less CPU processing in a benchmark setting. Second, we present the results of an empirical performance evaluation in which the data structures are compared in terms of query complexity and total complexity. The results indicate that ChainMerge is the best structure overall. The incidence matrix, although simple, excels in some benchmarks. The poset-derived forest is very fast overall if a 'true' poset data structure is not a requirement. Placing elements in smaller poset-derived forests and then merging them is an efficient way to construct poset-derived forests, and lazy evaluation for poset-derived forests shows some promise as well.
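ChainMerge-style structures maintain a poset as a decomposition into chains (totally ordered subsets). A much-simplified sketch of first-fit online insertion, using divisibility as the illustrative partial order (the real ChainMerge adaptation in the thesis also maintains cross-chain comparability information, which is omitted here):

```python
def divides(a, b):
    """Illustrative partial order: a <= b iff a divides b."""
    return b % a == 0

def first_fit_insert(chains, x, leq=divides):
    """Append x to the first chain whose top element is <= x in the
    partial order; otherwise open a new chain. This greedy first-fit
    rule does not guarantee a minimum-width chain decomposition."""
    for chain in chains:
        if leq(chain[-1], x):
            chain.append(x)
            return chains
    chains.append([x])
    return chains

chains = []
for x in [1, 2, 3, 4, 9, 8]:
    first_fit_insert(chains, x)
print(chains)  # -> [[1, 2, 4, 8], [3, 9]]
```

Queries such as "is a ≤ b?" then only need one comparison per chain rather than a scan of all elements, which is the source of ChainMerge's query efficiency.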
  • Savoranta, Ville (2017)
    Online community and mediated communication have become important tools for diasporas, serving as means of negotiating belonging and identity as well as of keeping in touch. Previous research has identified the particular ties that diasporic populations maintain both to their country of origin and to their new country; these form an essential part of the discussions in diasporic online communities. This study focuses on the Somali diaspora in Finland, whose online communities have been studied little. Strong spatial elements, essential to their functioning, have been observed both in theories of transnationalism and diaspora and in the networked nature of internet communities. For this reason, the study draws on spatial theory in developing its methodological approach. The aim is to examine how the Somali diaspora in Finland creates, organizes, and maintains online spaces. The challenges typical of these online communities, as well as their frequently discussed topics, are also an important object of study. The analysis additionally employs two further frameworks, focusing on the theory of transnationalism and on the diasporic position. The study is based on semi-structured interviews conducted within the Somali diaspora in Finland, collected using the so-called snowball method. The research material consists of 16 interviews. The main finding of the study is that members of the Somali diaspora in Finland maintain diverse engagements online and use social media in much the same way as other groups. Some diaspora-specific features can nevertheless be identified in the communities. The first of these is the use of language both as an instrument of control and as an enabler. Cultural reproduction and hybridization also play a significant role in the groups' discussions. Transnationalism appears in the groups through a variety of activities, conveyed in the interviewees' descriptions of their social media use. The results of the study criticize the default approach to studying diasporic particularities, as well as the reliance on case studies in research on diasporas' rapidly changing online cultures. Open methodologies, such as the spatial approach of this study, are presented as a better way to build a more representative picture of these diverse online cultures.
  • Haara, Riikka-Mari (2016)
    Capillary electrophoresis is a good option for analyzing metabolomic compounds, since the analytes are often charged. The technique is simple and cost-efficient, but it is not widely used because it lacks high concentration sensitivity. Therefore, on-line concentration techniques have been developed for capillary electrophoresis. The aim of this thesis is to give an introduction to the most common on-line concentration methods in capillary electrophoresis and to demonstrate a novel on-line concentration technique termed electroextraction. Until now, research on on-line concentration techniques in capillary electrophoresis has mainly focused on methods based on field amplification, transient isotachophoresis, titration-incorporated methods, or sweeping, which are presented in the literature section. In two-phase electroextraction, the electrodes are placed in an aqueous acceptor phase and in an organic donor phase in which the analytes are dissolved. When the voltage is applied, the conductivity difference between the two phases causes a high local field strength in the organic phase, leading to fast migration of the cationic analytes towards the cathode. As soon as the analytes cross the solvent interface, their migration speed decreases and they are concentrated at the phase boundary. In these experiments, a normal capillary electrophoresis analyzer was used with a hanging aqueous-phase droplet at the tip of the capillary inlet. The experimental part was carried out at Leiden University, Division of Analytical BioSciences, in the Netherlands. An electroextraction-capillary electrophoresis system was built for the analysis of biological acylcarnitine compounds. After the method parameters were assessed with ultraviolet detection, the method was coupled with mass spectrometric detection, and the selectivity and repeatability were briefly tested. Sensitivity was enhanced by the electroextraction procedure, but the extraction factors were not yet satisfactory. The selectivity of electroextraction became apparent when the extraction of acylcarnitines was performed using different solvents. Not all parameters affecting the electroextraction procedure were tested, and therefore the instability of the method was not completely understood; thus, the method should be further investigated and optimized. In fact, all on-line concentration methods ought to be optimized for the target analytes in their existing matrix.
  • Salminen, Sari (2013)
    Rapid detection of bioactive compounds plays an important role in the phytochemical investigation of natural plant extracts. Hyphenated techniques that couple on-line chromatographic separation and biochemical detection are called high-resolution screening methods. In such a system, high-performance liquid chromatography separates complex mixtures, and a post-column biochemical assay determines the activity of the individual compounds present in the mixtures. At the same time, parallel chemical detection techniques (e.g., diode-array detection, mass spectrometry, and nuclear magnetic resonance) identify and quantify the active compounds. In recent years, bioassays for radical scavenging (antioxidant) activity and immunoassays for antibodies in particular have been developed and applied, whereas assays for enzymes and receptors remain limited. The literature section of this thesis surveys the development of on-line, post-column biochemical detection systems for screening bioactive compounds from complex mixtures. The interaction of drugs with proteins has gained significant importance in various areas of analytical chemistry, and more drugs can be expected to be discovered as biotechnology develops. In the experimental section of this thesis, comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry was used to screen the chemical composition of birch bark (Betula pendula). Exploiting the mass spectra and retention index information allowed the identification of more than 600 organic compounds. Altogether, 59 phenolic compounds were identified in the inner layer of birch bark. To the best of our knowledge, some of these compounds (e.g., raspberry ketone and tyrosol) have not been reported as extractives of Betula species before. The results achieved by gas chromatography-mass spectrometry showed that several phenols with biological activity were present at relatively high concentrations in the sample. The content of the compounds was found to depend on the solvents, which differed in polarity and volatility. Phenols were extracted from birch bark using an environmentally friendly pressurized hot water extraction technique, which provided good extraction efficiencies for phenolic compounds compared to those achieved with Soxhlet extraction. With pressurized hot water extraction, the amount of extractable phenolic compounds reached up to 23 % of the dry weight, whereas the amount was 2-5 % (w/w) with Soxhlet extraction. Typical extraction times varied from 20 to 40 minutes, and most of the phenolic compounds were extracted at 180 °C for 40 minutes. Increasing the extraction temperature from 150 to 180 °C increased the number of phenols extracted; however, elevated temperature can also accelerate hydrolysis and oxidation, so that unwanted products can form or thermo-labile compounds can decompose. Pressurized hot water extraction, using water as the solvent, proved to be a very promising technique with great potential for the extraction of phenols from birch bark in the future.
  • Huttunen, Mika (2018)
    Monolithic architecture has been the standard way to architect applications for years. A monolithic application uses a single codebase, which keeps deployment and development simple without adding additional complexity as long as the size of the application stays relatively small. When the codebase grows, the architecture may deteriorate, slowing down development and making it harder to on-board new developers. Microservice architecture is a novel architectural style that tries to solve these issues in larger codebases. A microservice architecture consists of multiple small autonomous services that are deployed and developed separately. It enables more fine-grained scaling and makes faster development cycles possible by decreasing the amount of regression testing needed, because each of the services can be deployed and updated separately from the others. Microservice architecture also introduces multiple new challenges that have to be solved in order to benefit from it, such as handling distributed transactions, communication between microservices, and separation of concerns between microservices. On top of the technical challenges there are also organizational and operational challenges; the operational challenges include monitoring, logging, and automated deployment of microservices. This thesis studies the differences between monolithic and microservice architecture and pinpoints the main challenges in the transition from monolithic architecture to microservice architecture. A proof of concept on how to transform a single bounded context from a monolith to microservices is made to gain a better understanding of the challenges. A plan for migrating tangled bounded contexts from the monolith to microservices is also made in order to fully support the transition process in the future. The results from the proof of concept and the plan show that cohesion and loose coupling are more likely to be preserved when a bounded context is transformed into a microservice.
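The loose coupling that makes a bounded context extractable can be illustrated with a small sketch (all names here are hypothetical, not from the thesis): the rest of the monolith depends only on an explicit interface, so the in-process implementation can later be swapped for a client that calls the extracted microservice.

```python
from abc import ABC, abstractmethod

class BillingService(ABC):
    """Explicit interface for a hypothetical 'billing' bounded context."""
    @abstractmethod
    def invoice_total(self, customer_id: str) -> int: ...

class InProcessBilling(BillingService):
    """Monolith-era implementation: direct access to local data."""
    def __init__(self, invoices):
        self._invoices = invoices
    def invoice_total(self, customer_id):
        return sum(amount for cid, amount in self._invoices if cid == customer_id)

class RemoteBilling(BillingService):
    """Microservice-era implementation: same contract, remote call behind it.
    The network call is stubbed as a plain function here; a real client
    would wrap an HTTP request to the extracted service."""
    def __init__(self, fetch):
        self._fetch = fetch
    def invoice_total(self, customer_id):
        return self._fetch(customer_id)

def monthly_report(billing: BillingService, customer_id: str) -> str:
    # Callers depend only on the interface, not on where the context runs.
    return f"{customer_id}: {billing.invoice_total(customer_id)}"

invoices = [("acme", 100), ("acme", 250), ("globex", 40)]
print(monthly_report(InProcessBilling(invoices), "acme"))      # in-process variant
print(monthly_report(RemoteBilling(lambda cid: 350), "acme"))  # extracted-service stub
```

Because both implementations satisfy the same interface, the calling code is untouched by the extraction, which is the cohesion/loose-coupling property the proof of concept examines.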