
Browsing by Title


  • Tompuri, Seppo (2014)
    Computer games have traditionally been developed for desktop computers or game consoles. The explosive growth of the mobile game market has, however, challenged these platforms. Mobile games are often developed in the same style as desktop and console games, even though mobile devices contain technology, absent from desktops and consoles, that enables unconventional user input. Modern mobile devices include, among other things, various sensors that observe the environment, and often also a GPS receiver. The GPS receiver provides the player's location to the game. The sensors, in turn, monitor the device's surroundings, for example its acceleration in three dimensions. Data obtained from the sensors can be used as user input either directly or after processing, and it also makes it possible to infer gestures made by the player, which can be tied into the game's user input through pattern recognition. This new kind of user input enabled by mobile devices makes new game genres possible, ones that use the player's movement and location as part of the game mechanics. This thesis presents the parts of a game engine involved in collecting and processing GPS and sensor data, and introduces an architecture model, designed in this work, for exploiting GPS and sensor data on mobile devices. The thesis first covers the related concepts and the game loop that keeps the game running in real time, together with its different architecture models. It then reviews the aspects of the runtime architecture of game engines that relate to collecting GPS and sensor data. Finally, the architecture model designed in this work for exploiting GPS and sensor data in a mobile game engine architecture is presented and evaluated. The feasibility of the proposed model is verified with an implementation for the Windows Phone 8 platform, and the source code of this implementation is used to illustrate the proposed model.
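    As a rough illustration of the game-loop structure the abstract refers to, a minimal sketch (not the thesis's Windows Phone 8 code; the sensor stub and callback names are hypothetical):

    ```python
    import time

    class AccelerometerStub:
        """Hypothetical sensor source; a real engine would poll the platform's sensor API."""
        def read(self):
            return (0.0, 0.0, 9.81)  # (x, y, z) acceleration in m/s^2

    def game_loop(sensor, update, render, dt=1 / 30.0, frames=300):
        """Fixed-timestep loop: sample sensors, step the simulation, render."""
        accumulator, prev = 0.0, time.monotonic()
        for _ in range(frames):
            now = time.monotonic()
            accumulator += now - prev
            prev = now
            sample = sensor.read()       # raw sensor data for this frame
            while accumulator >= dt:     # advance the simulation in fixed steps
                update(dt, sample)
                accumulator -= dt
            render()

    game_loop(AccelerometerStub(),
              update=lambda dt, s: None,  # e.g. map acceleration to player movement
              render=lambda: None)
    ```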
  • Tyrväinen, Lasse (2016)
    Learning a model over possible actions and using the learned model to maximize the obtained reward is an integral part of many applications. Trying to simultaneously learn the model by exploring the state space and maximize the obtained reward using the learned model is an exploration-exploitation tradeoff. The Gaussian process upper confidence bound (GP-UCB) algorithm is an effective method for balancing exploitation and exploration when exploring spatially dependent data in n-dimensional space. The balance between exploration and exploitation is required to limit the amount of user feedback needed to achieve good prediction results in our context-based image retrieval system. The system starts with a high amount of exploration and — as the confidence in the model increases — it starts exploiting the gathered information to direct the search towards better results. While the implementation of GP-UCB is quite straightforward, it has a time complexity of O(n^3), which limits its use in near real-time applications. In this thesis I present our reinforcement learning image retrieval system based on GP-UCB, with a focus on the speed requirements of interactive applications. I also show simple methods to speed up the algorithm's running time by doing some of the Gaussian process calculations on the GPU.
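    For orientation on the O(n^3) claim, a minimal NumPy sketch of exact GP posterior inference and the UCB selection rule (not the thesis's GPU implementation; the RBF kernel and β value are illustrative assumptions):

    ```python
    import numpy as np

    rbf = lambda A, B: np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2)  # 1-D RBF kernel

    def gp_posterior(X, y, Xs, kernel=rbf, noise=1e-3):
        """Exact GP posterior mean/std; the Cholesky factorization is the O(n^3) step."""
        K = kernel(X, X) + noise * np.eye(len(X))
        Ks = kernel(X, Xs)
        L = np.linalg.cholesky(K)                               # O(n^3)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        mu = Ks.T @ alpha
        v = np.linalg.solve(L, Ks)
        var = kernel(Xs, Xs).diagonal() - np.einsum('ij,ij->j', v, v)
        return mu, np.sqrt(np.maximum(var, 0.0))

    def gp_ucb_pick(mu, sigma, beta=2.0):
        """GP-UCB acquisition: choose the candidate maximizing mean + beta * std."""
        return int(np.argmax(mu + beta * sigma))

    X = np.array([0.1, 0.5, 0.9])          # points with observed feedback
    Xs = np.linspace(0.0, 1.0, 50)         # candidate points
    mu, sd = gp_posterior(X, np.sin(X), Xs)
    print("next query:", Xs[gp_ucb_pick(mu, sd)])
    ```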
  • Jylhä-Ollila, Pekka (2020)
    K-mer counting is the process of building a histogram of all substrings of length k for an input string S. The problem itself is quite simple, but counting k-mers efficiently for a very large input string is a difficult task that has been researched extensively. In recent years the performance of k-mer counting algorithms has improved significantly, and there have been efforts to use graphics processing units (GPUs) in k-mer counting. The goal of this thesis was to design, implement and benchmark a GPU-accelerated k-mer counting algorithm, SNCGPU. The results showed that SNCGPU compares reasonably well to the Gerbil k-mer counting algorithm on a mid-range desktop computer, but does not utilize the resources of a high-end computing platform as efficiently. The implementation of SNCGPU is available as open-source software.
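    As a reference point for the problem definition, a minimal sequential sketch (the GPU algorithms in the thesis solve the same problem at a vastly larger scale):

    ```python
    from collections import Counter

    def count_kmers(s: str, k: int) -> Counter:
        """Histogram of all length-k substrings of s."""
        return Counter(s[i:i + k] for i in range(len(s) - k + 1))

    print(count_kmers("GATTACA", 3))
    # Counter({'GAT': 1, 'ATT': 1, 'TTA': 1, 'TAC': 1, 'ACA': 1})
    ```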
  • Cauchi, Daniel (2023)
    Alignment in genomics is the process of finding the positions where DNA strings fit best with one another, that is, where there are the fewest differences if they were placed side by side. This process, however, remains very computationally intensive, even with recent algorithmic advancements in the field. Pseudoalignment is emerging as an inexpensive alternative to full alignment, both in terms of the memory needed and in terms of power consumption. The idea is to instead check for the existence of substrings within the target DNA, and this has been shown to produce good results for many use cases. New methods for pseudoalignment are still evolving, and the goal of this thesis is to provide an implementation that massively parallelises the current state of the art, Themisto, by using all available resources. The most intensive parts of the pipeline are put on the GPU, while the components that run on the CPU are heavily parallelised. Reading and writing of files is also done in parallel, so that parallel I/O can be taken advantage of as well. Results on the Mahti supercomputer, using an NVIDIA A100, show a 10-fold end-to-end querying speedup over the best run of Themisto, while using half as many CPU cores as Themisto, on the dataset used in this thesis.
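    To make the substring-existence idea concrete, a toy sketch (Themisto itself queries a colored de Bruijn graph index, not a Python set; the 0.8 hit threshold is an illustrative assumption):

    ```python
    def pseudoalign(read: str, index: set, k: int, threshold: float = 0.8) -> bool:
        """A read 'pseudoaligns' to the reference if enough of its k-mers
        are found in the reference's k-mer index."""
        kmers = [read[i:i + k] for i in range(len(read) - k + 1)]
        hits = sum(kmer in index for kmer in kmers)
        return hits / len(kmers) >= threshold

    reference = "GATTACAGATTACA"
    k = 5
    index = {reference[i:i + k] for i in range(len(reference) - k + 1)}
    print(pseudoalign("GATTACAG", index, k))  # True
    ```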
  • Laanti, Topi (2022)
    The research and methods in the field of computational biology have grown in the last decades, thanks to the availability of biological data. One of the applications of computational biology is sequence alignment, a method to arrange sequences of, for example, DNA or RNA, to determine regions of similarity between these sequences. Sequence alignment applications include public health purposes, such as monitoring antimicrobial resistance. Demand for fast sequence alignment has led to the use of data structures, such as the de Bruijn graph, to store a large amount of information efficiently. De Bruijn graphs are currently one of the top data structures used in indexing genome sequences, and different methods to represent them have been explored. One of these methods is the BOSS data structure, a special case of the Wheeler graph index, which uses succinct data structures to represent a de Bruijn graph. As genomes can take a large amount of space, the construction of succinct de Bruijn graphs is slow. This has led to experimental research on using large-scale cluster engines such as Apache Spark and graphics processing units (GPUs) in genome data processing. This thesis explores the use of Apache Spark and Spark RAPIDS, a GPU computing library for Apache Spark, in the construction of a succinct de Bruijn graph index from genome sequences. The experimental results indicate that Spark RAPIDS can provide up to 8-fold speedups for specific operations, but for some other operations it has severe limitations that restrict its processing power in terms of succinct de Bruijn graph index construction.
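    For intuition about the underlying structure, a toy sketch of node-centric de Bruijn graph edges (succinct indexes such as BOSS store an ordered, compressed form of this edge information rather than an explicit graph):

    ```python
    def de_bruijn_edges(reads, k):
        """Each k-mer contributes an edge from its (k-1)-prefix to its (k-1)-suffix."""
        edges = set()
        for read in reads:
            for i in range(len(read) - k + 1):
                kmer = read[i:i + k]
                edges.add((kmer[:-1], kmer[1:]))
        return sorted(edges)

    for u, v in de_bruijn_edges(["GATTACA"], 4):
        print(u, "->", v)  # e.g. GAT -> ATT
    ```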
  • Lankinen, Juhana (2020)
    Due to the unique properties of foams, they can be found in many different applications in a wide variety of fields. The study of foams is also useful for the many properties they share with other phenomena, like impurities in cooling metals, where the impurities coarsen similarly to bubbles in foams. For these and other reasons foams have been studied extensively for over a hundred years, and they continue to be an interesting area of study today due to new insights in both experimental and theoretical work and new applications waiting to be used and realized in different industries. The most impactful early work on the properties of foams was done in the late 1800s by Plateau. His work was extended in the early to mid-1900s by Lifshitz, Slyozov, Wagner and von Neumann, and by many more authors in recent years. The early work was mostly experimental or theoretical in the sense of performing mathematical calculations on paper, while the modern methods of study have kept the experimental part -- with more refined methods of measurement, of course -- but shifted towards implementing the theory as simulations instead of solving problems on paper. In the early 90s Durian proposed a new method for simulating the mechanics of wet foams, based on repulsive spring-like forces between neighboring bubbles. This model was later extended to allow for the coarsening of the foam, and a slightly changed version of this model has been implemented in the code presented in this thesis. As foams consist of a very large number of bubbles, it is important to be able to simulate sufficiently large systems to realistically study the physics of foams. Very large systems have traditionally been too slow to simulate at the level of individual bubbles, but thanks to the popularity of computer games and the continuous demand for better graphics in games, graphics processing units have become very powerful and can nowadays be used for highly parallel general-purpose computing. In this thesis, a modified version of Durian's wet foam model that runs on the GPU is presented. The code has been implemented in modern C++ using Nvidia's CUDA on the GPU. Using this program, a typical two-dimensional foam is first simulated with 100,000 bubbles. It is found that the simulation code replicates the expected behaviour for this kind of foam. After this, a more detailed analysis is done of a novel phenomenon: the separation of liquid and gas phases in low gas fraction foams, which arises only at sufficiently large system sizes. It is found that the phase separation causes the foam to evolve as a foam of higher gas fraction would, until the phases have mixed back together. It is hypothesized that the cause of the phase separation is related to uneven energy distribution in the foam, which itself is related to jamming and to the uneven distribution of bubble sizes in the foam.
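    A serial sketch of the spring-like pair interaction in a Durian-type wet foam model (illustrative only; the thesis's CUDA code evaluates these interactions in parallel, and the unit spring constant is an assumption):

    ```python
    import numpy as np

    def pair_forces(pos, radii, k_spring=1.0):
        """Overlapping bubbles repel with a force proportional to their overlap."""
        forces = np.zeros_like(pos)
        n = len(pos)
        for i in range(n):
            for j in range(i + 1, n):
                d = pos[i] - pos[j]
                dist = np.linalg.norm(d)
                overlap = radii[i] + radii[j] - dist
                if overlap > 0:                      # bubbles in contact
                    f = k_spring * overlap * d / dist
                    forces[i] += f                   # push i away from j
                    forces[j] -= f
        return forces

    pos = np.array([[0.0, 0.0], [1.5, 0.0]])
    print(pair_forces(pos, np.array([1.0, 1.0])))    # equal and opposite forces
    ```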
  • Vasilev, Nikolay (2016)
    The financial crisis of 2007–2009 transformed the risk policies of large financial institutions. The supervision of groups engaged in life insurance and investment activities has been tightened further, and much more attention is now paid to risk management. Risk analysis, however, is a long and computationally heavy process, involving the evaluation of various risk measures and possibly rebalancing the portfolio. Value at Risk (VaR) is a very straightforward way to measure the total risk of an investment portfolio: it gives the portfolio's worst loss at a given confidence level. Estimating the loss can nevertheless be difficult, since the enormous portfolios of investment banks and life insurance companies entail complex calculations, which in turn demand great computing power and take a lot of time. For this reason, recent efforts have sought to speed up risk calculation software by, among other things, exploiting the massive parallelism offered by graphics cards. This thesis introduces the concept of VaR and numerical methods for measuring it, and studies the speedups obtainable with parallel computing using NVIDIA's CUDA programming framework. It is shown that in some cases the running time can drop to a fraction of what it was, and it is examined how important a role data transfers and memory accesses can play for the speedup. By reducing data transfers and making the GPU device's memory management more efficient, this thesis achieves a 7.8x speedup compared to an ordinary CPU implementation.
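    A minimal sketch of Monte Carlo VaR estimation (a toy one-period normal-returns model chosen for illustration; the thesis's CUDA implementation parallelises the independent simulation paths on the GPU):

    ```python
    import numpy as np

    def monte_carlo_var(value, mu, sigma, alpha=0.99, n_paths=100_000, seed=0):
        """alpha-quantile of simulated portfolio losses over one period."""
        rng = np.random.default_rng(seed)
        returns = rng.normal(mu, sigma, n_paths)  # independent paths: GPU-friendly
        losses = -value * returns                 # loss is negative P&L
        return np.quantile(losses, alpha)

    print(f"99% VaR: {monte_carlo_var(1_000_000, mu=0.0, sigma=0.02):,.0f}")
    ```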
  • Mäkelä, Mikko (2015)
    Graphene and graphene oxide are two-dimensional carbon materials. Graphene consists mainly of sp2-hybridized carbon atoms. In graphene oxide, oxygen-containing functional groups are attached to the network of sp2-hybridized carbon atoms, and some of the carbon atoms are therefore sp3-hybridized. Graphene can be exfoliated from graphite by various methods or built atom by atom. Graphene oxide is prepared by oxidizing graphite, for example with chlorates. Both graphene and graphene oxide can be modified covalently or supramolecularly. The modification often aims at better processability of the material or at new functionality, such as catalytic activity or sensor properties. In catalytic processes, unmodified graphene and graphite have few applications. Graphene oxide, by contrast, has been used in the manner of an acid catalyst and as an oxidant in many chemical processes, such as the hydration of alkynes, the oxidation of benzyl alcohols to aldehydes, and the oxidative coupling of amines. Activated carbon is a three-dimensional, porous, graphite-like material. Activated carbons oxidized in different ways can be prepared from it with nitric acid or aqua regia. The oxidation conditions strongly affect the distribution of functional groups in the resulting product. Activated carbons prepared in different ways also show differing catalytic activity in the homocoupling reaction of 2-arylindoles.
  • Tantarimäki, Mika (2015)
    GPU-assisted sorting is useful in situations where the CPU cannot sort the input fast enough, or where the input is already in GPU memory as part of other computation. This thesis reviews the sequential quicksort, radix sort and merge sort algorithms and explains, based on recent research, how they are adapted for parallel execution on the GPU. Merge-exchange sort was implemented on the CUDA platform and its performance was compared with the merge sort and radix sort implementations of the Thrust library. According to the measurements, merge-exchange sort is faster than quicksort running on the CPU, but it cannot match the performance of the library implementations as the number of input elements grows. In addition, it was measured how changing the size of the input elements affects the sorting speed of the three implementations. The experiments show that radix sort is the fastest when the elements are a few bytes in size, but as the element size grows, merge sort takes the lead in performance.
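    To illustrate why key size matters in the radix sort comparison above, a minimal LSD radix sort sketch (one stable counting pass per key byte, so the pass count grows with the key size; a CPU reference, not the thesis's CUDA code):

    ```python
    def lsd_radix_sort(keys, key_bytes=4):
        """Sort unsigned integers with one stable bucket pass per byte."""
        for shift in range(0, 8 * key_bytes, 8):
            buckets = [[] for _ in range(256)]
            for x in keys:
                buckets[(x >> shift) & 0xFF].append(x)
            keys = [x for bucket in buckets for x in bucket]
        return keys

    print(lsd_radix_sort([0xCAFE, 0xBEEF, 0x0042, 0xF00D]))
    # [66, 48879, 51966, 61453]
    ```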
  • Lavikka, Kari (2020)
    Visualization is an indispensable method in the exploration of genomic data. However, the current state of the art in genome browsers – a class of interactive visualization tools – limits exploration by coupling the visual representations with specific file formats. Because the tools do not support exploring the visualization design space, they are difficult to adapt to atypical data. Moreover, although the tools provide interactivity, the implementations are often rudimentary, encumbering the exploration of the data. This thesis introduces GenomeSpy, an interactive genome visualization tool that improves upon the current state of the art by providing better support for exploration. The tool uses a visualization grammar that allows for implementing novel visualization designs, which can display the underlying data more effectively. Moreover, the tool implements GPU-accelerated interactions that better support navigation in the genomic space. For instance, smoothly animated transitions between loci or sample sets improve the perception of causality and help users stay in the flow of exploration. The expressivity of the visualization grammar and the benefit of fluid interactions are validated with two case studies. The case studies demonstrate visualization of high-grade serous ovarian cancer data at different analysis phases. First, GenomeSpy is used to create a tool for scrutinizing raw copy-number variation data along with segmentation results. Second, the segmentations, along with point mutations, are used in a GenomeSpy-based multi-sample visualization that allows for exploring and comparing multiple data dimensions and samples at the same time. Although the focus has been on cancer research, the tool could be applied to other domains as well.
  • Halin, Mikko (2019)
    Graphs are an intuitive way to model connections between data and they have been used in problem solving since the 18th century. In modern applications graphs are used, e.g., in social network services, e-commerce sites and navigation systems. This thesis presents a graph-based approach for handling data and observing identities from network traffic.
  • Sainio, Rita Anniina (2023)
    Node classification is an important problem on networks in many different contexts, and optimizing the graph embedding has great potential to help improve classification accuracy. The purpose of this thesis is to explore how graph embeddings can be exploited in the node classification task in the context of citation networks. More specifically, this thesis looks into the impact of different kinds of embeddings on node classification, comparing their performance. Using three different similarity functions and embedding vector dimensions ranging from 1 to 800, we examined the impact of graph embeddings on accuracy in node classification using three benchmark datasets: Cora, Citeseer, and PubMed. Our experimental results indicate that there are some common tendencies in the way dimensionality impacts graph embedding quality regardless of the graph. We also established that some network-specific hyperparameter tuning clearly affects classification accuracy.
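    A minimal sketch of the evaluation setup described above: precomputed node embeddings feed a linear classifier, and held-out accuracy serves as a proxy for embedding quality (the random embeddings and scikit-learn classifier here are illustrative stand-ins, not the thesis's pipeline):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def classify_nodes(embeddings, labels, train_idx, test_idx):
        """Fit a linear classifier on embeddings of labeled nodes, score the rest."""
        clf = LogisticRegression(max_iter=1000)
        clf.fit(embeddings[train_idx], labels[train_idx])
        return clf.score(embeddings[test_idx], labels[test_idx])

    # Toy example with random "embeddings"; real inputs would come from an
    # embedding method run on a citation graph such as Cora or PubMed.
    rng = np.random.default_rng(0)
    emb = rng.normal(size=(100, 16))
    lab = (emb[:, 0] > 0).astype(int)
    print(classify_nodes(emb, lab, np.arange(70), np.arange(70, 100)))
    ```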
  • Karvinen, Mikael (2022)
    This thesis examines the effect of horizontal variations in gravity on the basic equations of the atmosphere and on the results of a simple atmospheric model in various simulations. The motivation was to study the effect of gravitational acceleration on modelling accuracy, since it is one of the many sources of uncertainty in weather prediction and climate simulations. The basic equations of the atmosphere were first re-derived, taking into account the horizontal variation of gravity. The corresponding changes were then made to the source code of the SPEEDY model, and simulations were run with the model to determine the effects of the gravity variations. To keep the analysis of the results as simple as possible, many simplifications were used in the simulations, the most significant being the replacement of the model Earth with an aquaplanet. The correctness of the equation changes in the model was verified with single time step experiments, after which a scale analysis was performed on the modified basic equations. According to the analysis, the additional terms arising from the gravity variations were mostly one to two orders of magnitude smaller than the other terms in the equations. Finally, ten-year simulations were run to examine the effects of the so-called normal gravity distribution on the model results. In these experiments the anomalies of the meteorological quantities were mostly moderate, but not negligibly small. For example, the largest changes observed in the wind field were about 2 m/s, while the temperature anomalies remained globally below half a degree. The anomaly of the meridional circulation showed a clear antisymmetry between the hemispheres: the intertropical convergence zone shifted from the equator to around latitude 10°S, while the rising motion weakened at latitude 5°N. In addition, the westerlies slowed down at the mid-latitudes of the northern hemisphere but strengthened in the southern hemisphere. Based on the results, this topic merits further study.
  • Nurmivaara, Sami (2023)
    Introduction: The issue of climate change has emerged as a global challenge in response to the increasing consumption of natural resources. As the Information Technology (IT) sector has undergone significant growth in recent years, the implementation of environmentally sustainable practices that lower the environmental impact of software, such as electricity usage, has become imperative. The concept of green in software engineering seeks to address these challenges in the software engineering process. Methods: As the goal is to explore and evaluate different approaches to environmental sustainability in green in software engineering, while also examining the maturity and evidence level of research on the subject, this study adopts a systematic literature review approach. The search strings, search process and other relevant information are meticulously documented and explored in each step of the research process. Results: Green in software engineering has been identified as a promising field of research, but the absence of agreed-upon definitions and terminology often leads to research efforts replicating previous studies without a clear rationale. The goal of increasing environmental sustainability is commonly agreed on in software engineering, but the concrete steps to achieve it are currently missing. Building a strong body of knowledge, establishing common measurements and tooling to support them, and increasing knowledge about sustainability in the field of software engineering should all be taken into account in the effort to reach the environmental sustainability goals of tomorrow.
  • Mehtälä, Harri Eerik Jalmari (2023)
    Background: The production, operation and use of information technology (IT) have a significant impact on the environment. As an example, the estimated footprint of global greenhouse gas emissions of the IT industry, including the production, operation and maintenance of main consumer devices, data centres and communication networks, doubled between 2007 (1–1.6%) and 2016 (2.5–3.1%). The European Union regulates the energy efficiency of data centre hardware. However, there is still a lack of regulation and guidance regarding the environmental impacts of software use, i.e. impacts from the production, operation and disposal of hardware devices required for using software. Aims: The goal of this thesis is to provide actionable knowledge which could be used by software practitioners aiming to reduce the environmental impacts of software use. Method: We conducted a systematic literature review of academic literature where we assessed evidence of the effectiveness of tools, methods and practices for reducing the environmental impacts of software use. The review covers 20 papers. Results: 60% of studied papers focus on reducing the energy consumption of software that is executed on a single local hardware device, which excludes networked software. The results contain 6 tools, 25 methods and 11 practices. Program code optimisation can potentially reduce the energy consumption of software use by 2–62%. Shifting the execution time of time-flexible data centre workloads towards times when the electric grid has plenty of renewable electricity can potentially reduce data centre CO2 emissions by 33.7%. Conclusions: The results suggest that the energy consumption of software use has received much attention in research. We suggest more research to be done on environmental impacts other than energy consumption, such as CO2 emissions, software-induced hardware obsolescence, electronic waste and freshwater consumption. Practitioners should also take into account the potential impacts of data transmission networks and remote hardware, such as data centres, in addition to local hardware.
  • Toivanen, Elias Akseli (2014)
    Two-electron integrals, which arise in the quantum mechanical description of electron-electron repulsion, are needed in electronic structure calculations. In this thesis, a fully numerical scheme for computing them has been developed and implemented. The accuracy and performance of the scheme are also demonstrated with proof-of-concept calculations. The work in this thesis is part of the ongoing efforts aiming at a fully numerical electronic structure code for massively parallel computer architectures. The power of these emerging computational resources can be seized only if all computational tasks are divided into small, independent parts that are then processed concurrently. Such a divide-and-conquer approach is indeed the main characteristic of the present integration scheme. The scheme is a variant of the Fast Multipole Method (FMM), an algorithm originally designed for rapid evaluation of electrostatic and gravitational potential fields in point particle simulations. Since the two-electron integrals can be formulated as a problem in electrostatics involving electrostatic potentials and continuous charge densities, the FMM algorithm is also applicable to them. The basic idea in the present scheme is to decompose the computational domain into sub-domains, in which the electron densities and electrostatic potentials are further decomposed into finite element functions. The two-electron integrals are then computed as a sum of contributions from each sub-domain. As the current scheme performs all integrals on real-space grids, it has been titled the Grid-based Fast Multipole Method (GB-FMM). Its computational cost scales linearly with respect to the number of sub-domains. The thesis consists of two parts – a literature review discussing the key features of electronic structure calculations at the Hartree-Fock level of theory and a documentation of the GB-FMM. The results of the proof-of-concept calculations are encouraging. The GB-FMM scheme can achieve parts-per-billion accuracy. In addition, an analysis of its performance in a single-core environment indicates that the computational cost of the GB-FMM scheme has a rather big prefactor but favorable scaling with respect to system size. However, as the GB-FMM algorithm has been designed with parallel execution in mind, its full power is predicted to become evident only when massively parallel computers become commonplace.
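    For reference, the electrostatic formulation that makes the FMM applicable is the textbook form of the two-electron integral (standard notation, not specific to this thesis): each integral is the mutual Coulomb energy of two charge densities,

    ```latex
    (ij|kl) = \iint
      \frac{\rho_{ij}(\mathbf{r}_1)\, \rho_{kl}(\mathbf{r}_2)}
           {\lvert \mathbf{r}_1 - \mathbf{r}_2 \rvert}
      \, \mathrm{d}^3 r_1 \, \mathrm{d}^3 r_2,
    \qquad
    \rho_{ij}(\mathbf{r}) = \phi_i^{*}(\mathbf{r})\, \phi_j(\mathbf{r}),
    ```

    so evaluating one amounts to computing the electrostatic potential of one density and integrating it against the other, which is exactly the kind of problem the FMM decomposes by sub-domain.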
  • Salminen, Kalle (2017)
    This thesis studies the comparison of metric spaces by defining the so-called Gromov–Hausdorff distance, a distance function between metric spaces that is shown to satisfy the axioms of a metric on every set of equivalence classes of metric spaces. It is proved that the Gromov–Hausdorff distance between two metric spaces is zero if and only if the spaces are isometric. As the main result, it is proved that a uniformly compact sequence of metric spaces has a subsequence that converges in the collection of compact metric spaces equipped with the Gromov–Hausdorff metric. Along the way, other general mathematical results used in the thesis are proved. Among these is the Heine–Borel theorem, according to which a metric space is compact if and only if it is totally bounded and complete; the proof is based on the characterization of compactness as sequential compactness, valid in metric spaces. It is also proved that if f is a uniformly continuous map from a dense subset A of a metric space (X, d_X) to a complete metric space (Y, d_Y), then there exists a uniformly continuous map g : \overline{A} → Y such that g is an extension of f. One of the most important intermediate results of the thesis concerns the completion of a metric space: for every metric space (X, d) there exist a complete metric space (Y, d^*) and an isometric map \varphi : X → Y such that \varphi(X) is dense in Y.
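    For reference, the standard definition (textbook material, not a result of the thesis): the Gromov–Hausdorff distance of metric spaces X and Y is the infimum of Hausdorff distances over all isometric embeddings of X and Y into a common metric space Z,

    ```latex
    d_{GH}(X, Y)
      = \inf_{Z,\, f,\, g} d_H^{Z}\bigl(f(X),\, g(Y)\bigr),
    \qquad
    d_H^{Z}(A, B)
      = \max\Bigl\{ \sup_{a \in A} \inf_{b \in B} d_Z(a, b),\;
                    \sup_{b \in B} \inf_{a \in A} d_Z(a, b) \Bigr\},
    ```

    where f : X → Z and g : Y → Z range over isometric embeddings.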
  • Pellikka, Hilkka (University of Helsinki, 2011)
    Sea level rise is among the most worrying consequences of climate change, and the biggest uncertainty of sea level predictions lies in the future behaviour of the ice sheets of Greenland and Antarctica. In this work, a literature review is made concerning the future of the Greenland ice sheet and the effect of its melting on Baltic Sea level. The relation between sea level and ice sheets is also considered more generally from a theoretical and historical point of view. Lately, surprisingly rapid changes in the amount of ice discharging into the sea have been observed along the coastal areas of the ice sheets, and the mass deficit of the Greenland and West Antarctic ice sheets, which are considered vulnerable to warming, has been increasing since the 1990s. The changes are probably related to atmospheric or oceanic temperature variations, which affect the flow speed of ice either via meltwater penetrating to the bottom of the ice sheet or via changes in the flow resistance generated by the floating parts of an ice stream. These phenomena are assumed to increase the mass deficit of the ice sheets in the warming climate; however, there is no comprehensive theory to explain and model them. Thus, it is not yet possible to make reliable predictions of the ice sheet contribution to sea level rise. On the grounds of the historical evidence it appears that sea level can rise rather rapidly, 1–2 metres per century, even during warm climate periods. Sea level rise projections of similar magnitude have been made with so-called semiempirical methods that are based on modelling the link between sea level and global mean temperature. Such a rapid rise would require considerable acceleration of the ice sheet flow. A stronger rise appears rather unlikely, among other things because the mountainous coastline restricts ice discharge from Greenland. The upper limit of sea level rise from Greenland alone has been estimated at half a metre by the end of this century. Due to changes in the Earth's gravity field, the sea level rise caused by melting ice is not spatially uniform. Near the melting ice sheet the sea level rise is considerably smaller than the global average, whereas farther away it is slightly greater than the average. Because of this phenomenon, the effect of the Greenland ice sheet on Baltic Sea level will probably be rather small during this century, 15 cm at most. Melting of the Antarctic ice sheet is clearly more dangerous for the Baltic Sea, but also very uncertain. It is likely that sea level predictions will become more accurate in the near future as ice sheet models develop.
  • Laakso, Jyri (2020)
    Subsurface sediments were investigated by radar acquisition campaigns and sedimentological investigations in the Kersilö area, Sodankylä, central Finnish Lapland, in order to provide information about the sedimentology and stratigraphy of the area and to reconstruct the succession of events related to the glacial and postglacial development of the subsurface sediments. The study area covers about 150 km² around the Kitinen river. The subsurface is controlled by unconsolidated coarse-grained sediments with a mean grain size ranging from sand to gravel. The typical thickness of the overburden varies from 5 to 15 metres, exceeding 20 metres in places. The eastern part of the study area is covered by Holocene peat of the Viiankiaapa mire, underlain by clastic sorted sediments and till. The eastern river bank is characterized by sorted sediments interpreted to represent an ancient braided-river environment. The western side of the river presents extensive sorted sediment deposits, interpreted to represent extramarginal-outwash and braided-river sediments; till beds are more dominant on the western side of the river. The stratigraphy of the Kärväsniemi test site comprises three sandy till beds, estimated to represent the Early, Middle and Late Weichselian glaciations. The till units are interbedded with more sorted fluvial sediments, estimated to be of Early and Middle Weichselian and Holocene origin. Absolute age determinations from the middle sorted sediment assemblage suggest the Odderade Interstadial between the Early and Middle Weichselian glaciations. Ground penetrating radar, utilising 50 MHz and 100 MHz antennas, proves its suitability for the investigation of fluvial deposits in a proglacial environment with abundant coarse-grained sediments. The quality of the data enables identification of lithological interfaces within and between sediment units. Seven radar facies and facies associations are identified and classified as organic, glacial and fluvial sediments; the fluvial sediments include five radar facies and facies associations characteristic of fluvial deposits. The sediments indicate a succession where glacial deposits alternate with fluvial sorted sediments indicating ice-free events. Fluvial activity is estimated to have been repetitious and especially intensive during the last deglaciation, possibly causing partial erosion of the till beds. The formation of organic peat started in the area after the final retreat of the Scandinavian Ice Sheet. Clastic surface sediments of deglacial to Holocene origin have experienced partial reworking by wind and floodwaters.
  • Kittilä, Anniina (2015)
    Bedrock fracturing is extensive and distinct in Finland, and fractures that are open, conductive and interconnected usually control the groundwater flow paths in fractured bedrock. This highlights the importance of knowing the locations and hydraulic connections of water-conducting fracture zones, particularly in mining areas, because they can transport adverse substances outside the mining area. This study focuses on examining possible hydraulic connections of bedrock groundwater by using the stable isotopes of oxygen (δ18O) and hydrogen (δ2H). The study was carried out in the Talvivaara mining area in northeastern Finland alongside a project of the Geological Survey of Finland (GTK). After November 2012, when a leakage of acidic, metal-containing waste water occurred in the gypsum ponds, there was an urgent need to study the groundwater transport routes in the bedrock fractures. The aim was to find hydraulic connections between surface water and groundwater, and to study the flow of groundwater in the fracture zones based on the different isotopic characteristics of waters from different sources and on isotopic similarities. Most of the materials used in this study were obtained from the GTK project, including geophysical interpretations of the locations and water content of the main fracture zones and the results of the geochemical analyses. Together with interpretations of the groundwater flow direction based on hydraulic heads, these materials formed a frame for this study. The isotope composition of 39 water samples from bedrock wells, shallow wells and surface water was analyzed using the cavity ring-down spectroscopy (CRDS) method. The surface waters were clearly distinguished by their evident evaporation signal, but no such significant signal was observed in the bedrock and shallow groundwaters. However, similarities were found between groundwater from different depths of the same well, in addition to similarities between different wells along the same fracture zones. Although the isotopes did not indicate surface water contamination, groundwater contamination with smaller amounts of water is possible, in which case the changes in isotope composition are not yet significant even though certain elements have elevated concentrations. A NE-SW oriented fracture zone passing through the center of the study area was concluded to have the most important role in collecting and transporting groundwater outside the mining area. More detailed interpretations would require regular sampling over a longer period of time to better distinguish naturally and artificially induced changes in both the isotopic and geochemical compositions. The use of packer tests, possibly together with pumping tests, would also be useful for obtaining a more comprehensive picture of the groundwater flow in the fracture zones and their hydraulic connections.