
Browsing by Title


  • Elf, Charlotte (2013)
    The purpose of this study was to examine what kinds of geomorphological formations can be found in the Hanko, Raasepori, Sipoo, Porvoo and Loviisa districts and which of them can be used for teaching purposes. The research questions were: 1) What interesting geomorphological formations can be found in these areas? 2) In what ways could geography and biology teachers in the areas use the formations in their geography teaching? 3) Which of the formations will only be taught through schoolbooks because they cannot be found in the nearby surroundings? The geomorphological formations taken into account are the following: a) erratic block, b) roche moutonnée, c) drumlin, d) esker, e) terminal moraine, f) kettle, g) pothole. In addition to these formations, some nationally valuable formations are described as well. This study was undertaken because other studies have shown that outdoor activities are seldom a part of the curriculum. These studies indicate that outdoor activities play a minor role because teachers feel unsure of themselves about the subject, lack time, and have groups of students that are too large. Studies have also shown that field trips stimulate deep learning by presenting an environment where students can connect what they have learnt with reality. Field teaching benefits not only knowledge but also students' self-confidence and motivation towards the subject, because the affective domain of the brain also influences deep learning by raising the motivation level and making students believe in their own capacity. Map and literature analyses were made to determine which places are worth visiting. Based on these analyses, field investigations were carried out at the chosen places, and the formations were documented with a camera and GPS. Soil samples were taken from three places for dry sieving in the sediment laboratory in Kumpula. The morphoscopic features of these glaciofluvial soil samples were also analyzed.
The results of the analyses and the field investigations showed that most of the geomorphological formations described in high school geography schoolbooks can be found in the districts. There are therefore opportunities to make field trips part of the curriculum, and thereby a chance for students to reach a deeper understanding of the Finnish physical landscape and its geomorphological formations. In addition to knowledge about the formations and the landscape, field trips enable students to get out of the classroom and explore nature with their own senses instead of through a schoolbook. The students' social skills also have a chance to develop, as does their command of the working methods of science subjects.
  • Hirvi, Outi (2015)
    The objective of the research is to study how the geopolitical situation during and after the Cold War can be distinguished in the national security strategies and discourses of the United States of America. The end of the Cold War has been understood as a turning point in the study of geopolitics, representing a transition to an era in which the influence and meaning of individual states has diminished and the world has moved into the era of globalization. The study seeks to determine how these changes can be detected in the foreign policy and geopolitical discourses of the United States. Further research questions include how the United States perceives itself in world politics during and after the Cold War, and how the difference is visible when the national security documents are compared with the public speeches of the presidents. The data chosen for the study consist of the National Security Strategies of the United States from the years 1950, 1987, 2000 and 2010. These documents are compared with the State of the Union Addresses from the corresponding years. The research method is critical discourse analysis, used together with traditional discourse analysis to reveal the power relations behind the discourses. Tabulation is used in the quantitative analysis to study the effects of globalization on the vocabulary. The changes in the discourses and the highlighting of different values indicate that shifting political goals transform the discourses that are used. The effect of globalization is detected in the altered way of speaking about enemies and allies, about the organizations and alliances that states form, and about the economy. The change in the position of the President is present in the documents especially through discourses and argumentation strategies.
Compared to traditional and critical geopolitical studies, the documents indicate that the research tradition should place more emphasis on the economy and on how it has affected international relations and international security. The significance of the economy as a pacifying element is highlighted in the results. The documents indicate that the United States has strived to promote capitalism since the beginning of the Cold War, creating an interdependent political system that makes conflicts between the participating states less likely. These changes are visible already before the end of the Cold War, and therefore that specific event does not represent a dramatic shift in the discourses.
  • Meri, Maija (2020)
    This study addresses the ways in which environmental challenges and power relations are manifested through tourism in Darién, the easternmost province of Panama. Historically, the area has remained relatively isolated from the rest of the country, and tourism there is small-scale. However, its high biodiversity and natural resources have drawn increasing attention, resulting in tensions and competing interests between different stakeholders. Local perceptions of tourism give insight into how people make sense of and engage with touristic activities, and how geopolitical and ecological discourses contribute to environmental inequalities. The theoretical background draws from geopolitical ecology, which examines the role and impact of the environment in the shaping of political space and power relations. The research is based on 37 thematic interviews and participant observation carried out during a one-month ethnographic fieldwork period in Darién. The findings indicate that tourism has helped expose the environmental challenges in Darién, but has also caused controversy over the use of resources for tourism. Tourism brings power relations to the fore and demonstrates that different players are in an uneven position. The results show that tourism in Darién has been influenced by its remoteness and by the nowadays largely misleading assumption of an unstable security situation. Darién faces a broad range of environmental problems, resulting mainly from the State's weak presence and poor environmental policies. However, tourism has locally been able to enhance environmental awareness and interest in conservation. Different tourism actors have unequal possibilities for making use of natural resources, depending largely on their wealth and social networks. Further geopolitical interests appear through territorial issues and questions concerning land ownership.
The findings indicate that by looking at tourism, many underlying tensions related to existing social inequalities, power relations and distribution of ecological benefits can be revealed.
  • Hurri, Karoliina (2016)
    China's identity in climate politics can be argued to be caught in a dilemma between being a responsible leader and being a developing country that still requires time for its emissions to peak. At COP21 in Paris in 2015, China negotiated both with the BASIC countries and bilaterally with the US. The objective of the research was to identify China's geopolitical identity in climate politics in the BASIC and US-China frames and to discuss possible similarities and differences between them. The hypothesis was that China identifies itself geopolitically differently in the two frames. The analysis was conducted on the basis of two questions concerning geopolitical identity: who is China, and where is China? China's National Development and Reform Commission (NDRC), the most important body in China's climate politics, has published news releases on the meetings with the BASIC countries and the United States, and these documents were analyzed within a critical discourse analysis frame. Discourses around the 'who is China?' question were discussed under four themes that emerged from the data: climate change, the principle of CBDRC, leadership, and the Paris Agreement. The 'where is China?' question was considered on the basis of the places that appeared most frequently in the documents: developing countries, the US-China coalition, the BASIC countries, developed countries, the Convention, the Parties, the Climate Change Working Group, the Green Climate Fund, the G-77 and China, and the Annex B countries. The results were applied by evaluating the BASIC and US-China frames as discourse-practice regimes, recognizing the climate change framings of the two, and then suggesting a geopolitical climate mapping of the frames. The conclusion contradicted the hypothesis, as China considered itself a representative of developing countries in both frames to a fair degree, while the discourse of climate change differed between the two. Thus, China is not negotiating in the two frames because of different identities but has distinct goals for them.
The worldviews of the two frames are different. The BASIC frame is strongly based on confronting the developed nations and building on that dichotomy. The worldview of the US-China frame is more postmodern and thus questions the 'norm' of being a developing country in international climate politics. The BASIC frame is a single-issue coalition, whereas the US-China frame includes more diverse cooperation and is slightly closer to China's own climate policy, such as its INDC document.
  • Steenari, Jussi (2023)
    Ship traffic is a major source of global greenhouse gas emissions, and the pressure on the maritime industry to lower its carbon footprint is constantly growing. One easy way for ships to lower their emissions would be to lower their sailing speed. Global ship traffic has long followed a practice called "sail fast, then wait": ships try to reach their destination as fast as possible and then wait at an anchorage near the harbor for a mooring place to become available. This method is easy to execute logistically, but it does not optimize sailing speeds with emissions in mind. An alternative tactic would be to calculate traffic patterns at the destination and use this information to plan the voyage so that the time at anchorage is minimized. This would allow ships to sail at lower speeds without compromising the total length of the journey. To create a model that schedules arrivals at ports, traffic patterns need to be derived describing how ships interact with port infrastructure. However, port infrastructure data is not widely available in an easy-to-use form, which makes it difficult to develop models capable of predicting traffic patterns. Ship voyage information, on the other hand, is readily available from commercial Automatic Identification System (AIS) data. In this thesis, I present a novel implementation that extracts information on port infrastructure from AIS data using the DBSCAN clustering algorithm. In addition to clustering the AIS data, the implementation uses a novel optimization method to search for optimal hyperparameters for the DBSCAN algorithm. The optimization process evaluates candidate solutions using cluster validity indices (CVIs), metrics that represent the goodness of a clustering. Different CVIs are compared to narrow down the most effective way to cluster AIS data for extracting information on port infrastructure.
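The clustering-plus-CVI loop described above can be sketched roughly as follows. The coordinates, the eps grid, and the use of the silhouette score as the CVI are illustrative assumptions, not the thesis's actual data or chosen index:

```python
# Sketch: cluster 2-D ship positions with DBSCAN and score each result
# with a cluster validity index (here, silhouette score) to pick the
# hyperparameters. All coordinates are invented, not real AIS data.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Two artificial "berth" clusters plus scattered noise points.
berth_a = rng.normal(loc=(60.15, 24.95), scale=0.001, size=(40, 2))
berth_b = rng.normal(loc=(60.17, 24.98), scale=0.001, size=(40, 2))
noise = rng.uniform(low=(60.10, 24.90), high=(60.20, 25.00), size=(10, 2))
points = np.vstack([berth_a, berth_b, noise])

best = None
for eps in (0.0005, 0.001, 0.002, 0.005):
    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(points)
    mask = labels != -1                      # silhouette ignores noise
    if mask.sum() < 2 or len(set(labels[mask])) < 2:
        continue
    score = silhouette_score(points[mask], labels[mask])
    if best is None or score > best[0]:
        best = (score, eps, labels)

score, eps, labels = best
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(f"best eps={eps}, silhouette={score:.2f}, clusters={n_clusters}")
```

The thesis searches the hyperparameter space with a dedicated optimization method and compares several CVIs; the grid search above only shows the shape of the evaluate-and-select loop.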
  • Juvonen, Katriina (2021)
    In Kurikka, Southern Ostrobothnia, the ground is difficult to build on because of the extensive fine-grained sediment deposits in the soil. The spatial distribution of these deposits is not known, so the aim of this work is to produce a regional 3D model of the soil based on geotechnical site investigations, using the site investigation data in the building permit archive of the town of Kurikka. The site investigations consist of soil drillings carried out for individual construction projects. The data comprise ten investigation sites in total, and 112 drillings are used in the 3D model. From the site investigation data, the soil type information and the loading characteristics of each drilling were digitized site by site and drilling by drilling. The digitization was done with the 3D-Win software of the Finnish company 3D-Systems, and the 3D model was built with the Leapfrog Geo® software. In the first stage of the model, the soil was divided into eight units; in the final, simplified model the fine-grained units were combined into a single unit. The resulting models were compared with a type cross-section of the area, and they are used to assess which data resolution gives the model the best predictive power. The 3D models were successfully produced. According to the model, the area has eight soil units (from oldest to youngest): tills, sands, silts, clays, gyttja clays, silts 2, clays 2 and, at the surface, humus and fill soils. Several of the model's units (silts, clays and gyttja clays) have similar geotechnical properties, so they were combined into a single fine-grained unit. The units of the simplified model were compared with the type cross-section of the area; the sediment sequence of the type cross-section and the fine-grained unit of the model correspond well to each other. Based on the results of this work, in 3D soil modelling based on geotechnical site investigations, a simplified model of the fine-grained units is more reliable in terms of predictive power than a complex model.
  • Ye, Yina (2015)
    Gesture recognition is a fundamental technology for understanding human behavior and providing natural human-computer interaction, and the need for it has been boosted by the popularity of ubiquitous systems and applications. Currently, there are two main classes of gesture recognition approaches: parametric methods and template matching methods. Although both approaches provide accurate recognition, the former is usually highly constrained in time and resources, while the latter is limited to a small range of gesture types that are simple and distinct from each other. The objective of the thesis is to present a novel stroke-based gesture recognition solution, Gestimator, that recognizes dynamic hand gestures with high accuracy, run-time efficiency, and ease of customization, across a wide range of complexity, ambiguity, and difficulty for the user performing the gesture. A stroke-wise elastic gesture matching framework, together with an adaptive sequence segmentation technique, was developed to improve recognition accuracy. We conducted extensive evaluations using three datasets that included pen-based command gestures, character gestures, and mid-air gestures collected from a user authentication experiment. Results from these benchmark evaluations show that Gestimator achieves higher overall accuracy than three state-of-the-art gesture recognizers on both touch-screen gestures (98.9%) and spatial gestures (96.61%). Results also show that Gestimator outperforms baseline methods in recognizing ambiguous gestures.
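As a rough illustration of the elastic-matching idea behind template-based recognizers (not Gestimator's actual stroke-wise algorithm), a minimal dynamic time warping (DTW) classifier for 2-D traces might look like this; the templates and test stroke below are invented:

```python
# Sketch of elastic template matching for 2-D gesture traces using
# dynamic time warping (DTW): the warping path absorbs differences in
# speed and sampling between the input trace and each stored template.
import math

def dtw_distance(a, b):
    """DTW distance between two sequences of (x, y) points."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(a[i - 1], b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # skip a point in a
                                 cost[i][j - 1],      # skip a point in b
                                 cost[i - 1][j - 1])  # match the points
    return cost[n][m]

def classify(trace, templates):
    """Return the label of the nearest template under DTW."""
    return min(templates, key=lambda lbl: dtw_distance(trace, templates[lbl]))

templates = {
    "horizontal": [(0, 0), (1, 0), (2, 0), (3, 0)],
    "diagonal":   [(0, 0), (1, 1), (2, 2), (3, 3)],
}
# A noisy, differently sampled horizontal stroke still matches.
stroke = [(0, 0.1), (0.5, 0.0), (1.5, 0.1), (2.5, -0.1), (3, 0.0)]
print(classify(stroke, templates))
```

Plain DTW is quadratic in sequence length, which is one reason elastic matchers like the one in the thesis need careful engineering to stay run-time efficient.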
  • Alizadeh, Amin (2014)
    Gesture recognition is a hard task due to the presence of noise resulting from the unpredictability and ambiguity of body motions. The joints' locations vary on all axes, which can add extra noise to the recognition task, and further noise is added because the target group of the research is small children. On the other hand, the number of gestures, and the similar features that some of them share, make the recognition task even harder; therefore multiple recognitions for different joints need to be run in parallel. Techniques based on Hidden Markov Models and the concept of a threshold model are used to distinguish gesture motions from non-gesture motions. First, a series of gestures is recorded and used to create the models. The K-Means algorithm is used to cluster the points into N states and to label the 3D points. The available alphabet of output symbols is then expanded to M (M > N) states, since it is not certain whether a sequence of points is a gesture or not. Next, by looking at the sequence of labeled data, it is possible to estimate how likely it is that the points have passed through the sequence of N states. Finally, if the likelihood is above the threshold, a gesture is recognized.
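The quantization step above (clustering 3-D joint positions into discrete states before any HMM scoring) can be sketched as below; the toy points, the naive initialization, and k = 2 are illustrative choices only, not the thesis setup:

```python
# Sketch of K-Means quantization: 3-D "joint positions" are clustered
# into k discrete states, turning a motion trace into a symbol sequence
# that an HMM (not shown here) could score against a threshold model.
import math

def kmeans(points, k, iters=20):
    """Plain K-Means; returns (centroids, label per point)."""
    centroids = points[:k]                       # naive initialization
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        labels = [min(range(k), key=lambda c: math.dist(p, centroids[c]))
                  for p in points]
        # Move each centroid to the mean of its members.
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = tuple(sum(x) / len(members)
                                     for x in zip(*members))
    return centroids, labels

# Two well-separated clumps of 3-D joint positions (toy data).
points = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.1), (0.0, 0.1, 0.0),
          (1.0, 1.0, 1.0), (1.1, 0.9, 1.0), (0.9, 1.0, 1.1)]
centroids, labels = kmeans(points, k=2)
print(labels)  # each point mapped to a discrete state symbol
```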
  • Paakkola, Kalle (2024)
    SQLite has been called the most widely deployed database system, but its use in web services has been somewhat limited compared to client/server database engines. Thanks to its continued development, SQLite today has the features needed to be a serious option for certain kinds of web services, and it is also the technology behind several emerging globally distributed database technologies. In a case study, an existing web application backed by centralized SQLite is evaluated in terms of the trade-offs that would have to be made when switching to a globally distributed database. This is done by benchmarking the difference in latency that users experience depending on their geographical location. In addition, known challenges of distributed computing, as well as challenges specific to migrating from a centralized embedded database to a globally distributed one, are evaluated. We found that there are latency improvements to be made with the globally distributed approach; that said, optimizing application code is likely to be the most effective latency improvement for many projects. The increased complexity of running a distributed system compared to a centralized one was, in our estimation, a major reason why the application under study ultimately decided not to migrate to a globally distributed deployment. Our findings relate primarily to this one application, and other applications in different circumstances could come to a different conclusion. These technologies are still advancing rapidly, so the properties of globally distributed database technologies will likely continue to evolve.
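The kind of latency benchmark described above can be sketched locally with Python's standard sqlite3 module. This measures only local query time (the schema and query are invented for illustration); the geographic latency studied in the thesis comes from network distance on top of this:

```python
# Sketch: measure round-trip latency of a simple query against a local
# SQLite database. Times here exclude any network hop, so a distributed
# deployment would add its own network latency on top.
import sqlite3
import statistics
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [(f"user{i}",) for i in range(1000)])
conn.commit()

samples = []
for _ in range(100):
    start = time.perf_counter()
    conn.execute("SELECT name FROM users WHERE id = ?", (500,)).fetchone()
    samples.append((time.perf_counter() - start) * 1000)  # milliseconds

print(f"median query latency: {statistics.median(samples):.3f} ms")
```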
  • Vilhonen, Essi (2023)
    Many extensions to the Standard Model of particle physics feature a first-order phase transition in the very early universe. Such a phase transition would source gravitational waves through the collision of nucleation bubbles, which in turn could be detected, for example, with the future space-based gravitational wave observatory LISA (Laser Interferometer Space Antenna). Cosmic strings, on the other hand, are line-like topological defects. In this work, we focus on global strings arising from the spontaneous breakdown of a global symmetry. One example of global strings is axionic strings, a popular research topic owing to the role of the axion as a potential dark matter candidate and a solution to the strong CP problem. Our aim in this work is to combine these two sets of early-universe phenomena by investigating the possibility of creating global strings through the bubble collisions of a first-order phase transition. We use a simplified model with a two-component scalar field to nucleate the bubbles and simulate their expansion, obtaining a short-lived network of global strings in the process. We present results for string lifetime, for mean string separations corresponding to different mean bubble separations, and for gravitational wave spectra.
  • Blomqvist, Niclas (2016)
    The topography of the Earth's surface is the result of the interaction of tectonics, erosion and climate. Topography should therefore contain a record of these processes that can be extracted by topographic analysis. The question considered in this study is whether the spatial variations in erosion that have sculpted the modern topography are representative of long-term erosion rates in mountainous regions. We compare long-term erosion rates derived from low-temperature thermochronometry with erosional proxies calculated from topographic and climatic data. The study was performed on a global scale, covering six orogens: the Himalaya, the Andes, Taiwan, the Olympic Mountains, the Southern Alps of New Zealand and the European Alps. The data were analyzed using a new swath profile analysis tool for ArcGIS called ArcSwath to determine the correlations between long-term erosion rates and modern elevations, slope angles, relief in 2.5-km- and 5-km-diameter circles, erosion potential, normalized channel steepness index ksn, and annual rainfall. ArcSwath uses a Python script incorporated into an ArcMap 10.2 add-in tool, extracting swath profiles in about ten seconds, compared to earlier workflows that could take more than an hour. Swath profile analysis is a relatively common method in geomorphological research. A swath profile is a rectangular extraction of a digital model into a cross-section, along which statistical parameters (minimum, mean and maximum) are presented. In previous studies, swath profiles have been used to identify relationships between topography and major structures, to compare various climatic, erosional and topographic data across a given orogen and along strike, and to recognize fluvially and glacially eroded forms. No unambiguous correlation between the topographic or climatic metrics and long-term erosion rates was found.
Fitting linear regression lines to the topographic/climatic metric data and the long-term erosion rates shows that 86 of 288 plots (30%) have 'good' R² values (> 0.35) and 135 of 288 (47%) have 'acceptable' R² values (> 0.2). The 'acceptable' and 'good' thresholds were selected on the basis of visual fit to the regression line. The majority of the plots with a 'good' correlation have positive correlations, while 11 of the 86 have negative regression slopes. Interestingly, two topographic profile shapes were clear in the swath profiles: concave-up (e.g., the central-western Himalaya and the northern Bolivian Andes) and concave-down or straight (e.g., the eastern Himalaya and the southern Bolivian Andes). At the orogen scale, the concave-up shape is often related to relatively high precipitation and erosion rates on the slopes of steep topography, whereas the concave-down/straight profiles seem to occur in association with low rainfall and/or erosion rates. Though we cannot say so with confidence, the lack of a clear correlation between long-term erosion rates and climate or topography may be due to the difference in their respective timescales, as climate can vary over timescales shorter than 10^5-10^7 years. In that case, variations between fluvial and glacial erosion may have overprinted each other's erosional effects.
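The R² classification used above can be illustrated with a small least-squares sketch; the channel-steepness and erosion-rate values are invented, while the 'good'/'acceptable' thresholds are the ones quoted in the abstract:

```python
# Sketch: fit a line between a topographic metric and long-term erosion
# rate, compute R^2, and class it with the abstract's thresholds
# ('good' > 0.35, 'acceptable' > 0.2). Data points are invented.

def linfit_r2(xs, ys):
    """Least-squares slope, intercept and R^2 for paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1 - ss_res / ss_tot

# Hypothetical channel-steepness values vs erosion rates (km/Myr).
ksn = [50, 120, 200, 310, 420, 480]
erosion = [0.2, 0.5, 0.6, 1.1, 1.3, 1.6]
slope, intercept, r2 = linfit_r2(ksn, erosion)
grade = "good" if r2 > 0.35 else "acceptable" if r2 > 0.2 else "poor"
print(f"R^2 = {r2:.2f} -> {grade}")
```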
  • Virolainen, Savi (2018)
    Economic phenomena in particular, and the time series describing them, often exhibit switching between different regimes, caused for example by fluctuations in market stability. Time series models capable of explaining such regime-switching include autoregressive mixture models, such as the GMAR (Gaussian Mixture Autoregressive) and StMAR (Student's t Mixture Autoregressive) models. Building on these, this thesis introduces the G-StMAR model (Gaussian and Student's t Mixture Autoregressive), which combines features of both. Autoregressive mixture models can be thought of as collections of linear autoregressive models, each of which is called a component of the model and is thought to describe one of the regimes of the phenomenon. In the GMAR model the components are assumed to be Gaussian autoregressive processes, whereas in the StMAR model they are conditionally heteroskedastic autoregressive processes based on the t-distribution. The fact that the conditional variance of an StMAR component depends on the same parameters as its conditional mean can, however, be restrictive in cases where the component-specific conditional mean is strong but the conditional variance is weak. For this reason, the thesis generalizes the StMAR model into the G-StMAR model by allowing some of its components to be based on the Gaussian autoregressive processes used in the GMAR model, in which the conditional variance is assumed constant. The thesis presents the GMAR and StMAR models and, based on them, defines the G-StMAR model. It is also shown that the attractive theoretical properties of the GMAR and StMAR models, such as ergodicity and a known stationary distribution, carry over in an obvious way to the G-StMAR model.
After introducing the models, the thesis briefly describes how they can be estimated using a two-phase procedure, how suitable orders can be selected for them, how quantile residuals can be used to assess model fit, and how future observations of the underlying process can be forecast by simulation. The empirical part of the thesis examines what the parameter estimates of the G-StMAR model look like compared with the underlying StMAR model, and the forecast accuracies of the models are also compared. As example data, a time series describing the daily volatility of the Standard & Poor's 500 stock market index over the period 3 January 2000 to 20 May 2016 is used. Based on the results, there is hardly any difference between the forecast accuracies of the StMAR and G-StMAR models, but in some cases switching to the G-StMAR model avoids problems with the parameter estimates of the StMAR model.
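In simplified notation (scalar series, AR order p, M components; see the thesis for the exact component densities and the definition of the mixing weights), the conditional density of such an autoregressive mixture model has the form:

```latex
% Conditional density of an M-component autoregressive mixture model
% (simplified notation; the mixing weights \alpha_{m,t} are time-varying
% and defined in the thesis).
f(y_t \mid \mathcal{F}_{t-1})
  = \sum_{m=1}^{M} \alpha_{m,t}\, f_m(y_t \mid \mathcal{F}_{t-1}),
\qquad
\mu_{m,t} = \varphi_{m,0} + \sum_{i=1}^{p} \varphi_{m,i}\, y_{t-i}.
```

In the Gaussian (GMAR-type) components, f_m is a normal density with mean μ_{m,t} and constant variance; in the StMAR-type components it is a Student's t density with a time-varying conditional variance. The G-StMAR model simply allows both component types within one mixture.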
  • Karttunen, Henri (2015)
    Nonlinear time series models can be used to model the non-Gaussian features that appear in time series; this thesis focuses in particular on autoregressive mixture models. Autoregressive mixture models are defined as mixtures of linear autoregressive models, and different mixture models differ in how their mixing weights are defined. The GMAR (Gaussian Mixture Autoregressive) model has attractive theoretical properties, since its stationary distribution is known and its stationarity condition and ergodicity can be derived without additional restrictions on the parameters. However, because of the complicated definition of its mixing weights, estimating its parameters with the EM algorithm commonly used in the literature is difficult. For this reason, the thesis investigates the possibility of using a two-phase estimation procedure, in which a genetic algorithm is used to find initial values for a gradient-based optimization algorithm. In addition to parameter estimation, the thesis considers model selection as part of the estimation process. The tools examined for finding a suitable model are information criteria and various tests based on quantile residuals, which make model diagnostics possible in the same way as with ordinary residuals even when ordinary residuals cannot be used. The thesis also discusses computing forecasts by simulation and shows how the GMAR model can be simulated. The empirical part of the thesis examines two examples. The first focuses on estimation, model selection and diagnostics, using monthly United States inflation from 1975 to 2015 as data. The second example examines wind speeds using daily data and focuses in particular on computing forecasts.
The wind speed data form a non-negative time series, so the example considers estimation using the logarithm of the series and forecasting of the original series. Based on the results of the thesis, two-phase estimation using a genetic algorithm works well for the GMAR model and completes in a reasonable amount of time.
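The two-phase idea (a genetic algorithm for initial values, then gradient-based refinement) can be sketched on a toy multimodal objective standing in for the much harder GMAR log-likelihood surface; everything below (the objective, population size, mutation scale, learning rate) is an illustrative assumption:

```python
# Sketch of two-phase estimation: a small genetic algorithm searches
# broadly for a starting point, then numerical gradient descent refines
# it. The toy objective has two minima; only the basin near x = -1
# contains the global minimum.
import random

def objective(x):
    return (x * x - 1) ** 2 + 0.3 * x

def genetic_search(pop_size=40, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(-3, 3) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)
        parents = pop[: pop_size // 2]          # keep the fitter half
        children = [rng.choice(parents) + rng.gauss(0, 0.2)  # mutate
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=objective)

def gradient_refine(x, lr=0.01, steps=500, h=1e-6):
    for _ in range(steps):
        grad = (objective(x + h) - objective(x - h)) / (2 * h)
        x -= lr * grad
    return x

x0 = genetic_search()        # phase 1: rough global search
x_hat = gradient_refine(x0)  # phase 2: local gradient-based polish
print(f"estimate: {x_hat:.3f}")
```

The global search matters precisely because a purely gradient-based run started in the wrong basin would converge to the inferior local minimum.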
  • Lindroos, Linda (2013)
    III-V nanowires (NWs) are emerging as a new class of interesting nanostructures that hold promising potential for future-generation electronic and optoelectronic devices. NWs are most often grown epitaxially via the vapor-liquid-solid (VLS) mechanism, which in the ideal case allows accurate control of diameter, length, crystal phase, doping concentration and junction formation. The biggest challenge in the field of NWs is, however, controlling the growth and understanding the underlying mechanism. The aim of the experimental part of this work was to optimize the growth parameters and improve the optical properties of Au-assisted InP NW growth on silicon and glass substrates using atmospheric-pressure metal-organic vapor phase epitaxy (MOVPE). The growth parameters of InP NWs on Si (111) substrates were optimized in terms of substrate pre-treatments, growth temperature, V/III ratio, and Zn doping. Consistent with earlier results, all the growth parameters were found to affect the morphological and optical quality of the NWs. In addition, the NWs exhibited strong quantum confinement for diameters below 20 nm. To improve the optical properties, surface passivation of InP NWs was investigated: core-shell structures were formed both in situ with MOVPE and ex situ with atomic layer deposition (ALD). Growth of InP NWs on low-cost glass substrates was also studied, and encouraging results were achieved. The results of this work can be used as a basis for further studies of more complex InP-based NWs.
  • Salminen, Samu (2016)
    This Master's thesis examines how well the Gompertz mortality model currently used under the Employees Pensions Act (TyEL) describes the select mortality of TyEL in the old-age pension business. The purpose of the thesis is to highlight the shortcomings of the current model and to present an extension of the Gompertz mortality model that would at least partly correct them while remaining technically simple enough for the pension system. In addition to the general theory of survival insurance and the actuarial technique of the old-age pension business under the Employees Pensions Act, the central themes of the thesis are forecasting realized mortality with an application of the Lee-Miller model and estimating the parameters of the extended Gompertz mortality model from observational data. The data used are both the population mortality statistics of Statistics Finland and the realized amounts-weighted mortality rates from TyEL risk basis analyses. The forecasting model was built using R and Excel (incl. VBA). The extension of the mortality model has been developed in the earnings-related pension system since 2014 in the mortality basis working group operating under the negotiation board on actuarial calculation bases; the author of this thesis served as secretary of the working group in 2014 and 2015. The chairing company in those years was Elo Mutual Pension Insurance Company, and the chair was Tuomas Hakkarainen, the second examiner of this thesis. In addition to every pension insurance company, the working group includes representatives of pension funds and pension foundations, the Ministry of Social Affairs and Health, the Finnish Centre for Pensions, and Keva (public-sector pensions). The choice and accuracy of the mortality model matter in the old-age pension business, because the model determines the capital value coefficients used to provide for future and current old-age pension payments. The new mortality model presented in the thesis will be adopted in connection with the 2017 pension reform, meaning that old-age pension liabilities will be calculated with it for the first time at the end of 2016.
The old-age pension contribution will be determined according to the new model from 2017 onwards.
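For reference, the classical Gompertz law that such models build on assumes the force of mortality grows exponentially with age. The sketch below uses invented parameter values, not the actual TyEL basis parameters:

```python
# Sketch of the Gompertz mortality law: force of mortality
# mu(x) = B * c**x, so the one-year death probability follows from
# integrating the hazard over one year of age. B and c are illustrative.
import math

B = 5e-5   # baseline mortality level (illustrative)
c = 1.10   # exponential growth rate per year of age (illustrative)

def mu(x):
    """Force of mortality at exact age x."""
    return B * c ** x

def q(x):
    """One-year death probability from age x (integrated hazard)."""
    integral = B * (c ** x) * (c - 1) / math.log(c)
    return 1 - math.exp(-integral)

for age in (40, 60, 80):
    print(f"age {age}: mu = {mu(age):.5f}, q = {q(age):.5f}")
```

The exponential growth in age is what makes the choice of parameters, and any extension of the law, so consequential for the capital value coefficients mentioned above.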
  • Anttila, Susanna (2015)
    The purpose of this study is to establish the readiness of high school geography teachers in teaching information and communication technology (ICT), with a particular focus on teaching the use of Google Earth. The aim was to acquire the first research results on the use of Google Earth in teaching, user experiences, and factors that may increase its use. Furthermore, the purpose of the game experiment was to raise awareness of the suitability of Google Earth for teaching, and of the possibilities of using games in high school geography tuition. The skills and attitudes of Finnish high school geography teachers (n = 103) were charted using an electronic survey. The game trial (n = 31) was implemented as a field experiment in a first year geography class at a high school in Helsinki. Two of the teachers teaching the course were also interviewed. The results were analysed using a statistical analysis program and presented through graphs and citations. Based on the study, the teachers' IT skills were good, but heterogeneous. Teacher training had not equipped the teachers sufficiently in using technology; instead, their practical skills had been acquired through self-study, supplementary training, and peer support. The majority of the teachers used Google Earth in their classes. Google Earth is ideal for high school lessons, particularly for teaching geographical thinking and geoinformatics, due to its ease of use, versatility, and the fact that it is free of charge. Its use was however very occasional and mostly teacher-led. Student directed use was most prominent during the regional studies (GE4) course. The Google Earth learning game on vegetation zones increased student motivation and positively influenced the learning process. Teachers and students alike were interested in using Google Earth as one of the high school geography learning environments in the future. 
Geography teachers expressed a need for supplementary training that is practical in nature, and for ready-made teaching ideas and exercises to increase the use of Google Earth. To enable this, supplementary training must be developed equally in all parts of Finland, and the applicability of today's teacher training to the demands of the teaching profession should be reviewed. The new national high school curriculum, the soon-to-be electronic matriculation examinations, and the new distribution of lessons enable discussion on the central content and objectives of geography tuition, especially on the role of geoinformatics and ICT.
  • Lipsunen, Werner (2023)
    This thesis examines the implementation of general purpose graphics processing unit (GPGPU) acceleration of a non-equilibrium Green's function (NEGF) equation solver in the context of a computational photoelectrochemical (PEC) cell model. The goal is to find out whether GPGPU acceleration of the NEGF equation solver is a viable option. The current model does not yet include electron-photon scattering, but from the results it is possible to assess the viability of GPGPU acceleration in the case of a complete PEC cell model. The viability of GPGPU acceleration was studied by comparing the performance of two graphics processing unit (GPU) solutions against a multi-core central processing unit (CPU) solution. The difference between the two GPU solutions was in the floating-point precision used. The GPU solutions used LU factorization to solve the NEGF equations, and the CPU solution used a banded solver (Gauss tridiagonal) provided by the SciPy Python package. The performance comparison was done on several different GPU and CPU hardware configurations. The electrical transport properties of the PEC cell were modeled by a self-consistent process in which the NEGF and Poisson equations were solved iteratively. The PEC cell was described as a semiconductor device connected to metal and electrolyte contacts. The device was assumed to be a simple one-dimensional tight-binding atom chain made of GaAs, where the transverse modes in the y–z plane are treated with a logarithmic function. The computational model lacked electron-photon scattering, which is to be implemented in the future. From the benchmark results, it can be concluded that GPGPU acceleration via LU factorization is not a viable option in the current code or in the complete model with electron-photon scattering and the assumed approximations. The parallel multi-core CPU code generally outperformed the GPU codes. The key weakness of the GPU code was the use of LU factorization. 
Despite this, there could be an opportunity for GPGPU acceleration if a more complex lattice structure and more exact scattering terms were used. A GPU-accelerated tridiagonal solver could also be a possible solution.
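The tradeoff the abstract describes, a specialized Gauss tridiagonal (Thomas) solve versus a general dense LU-based solve, can be sketched in plain NumPy. The toy system below is only an illustrative stand-in for a 1-D tight-binding chain, not the thesis code:

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with the Thomas (Gauss tridiagonal) algorithm.

    a: sub-diagonal (len n-1), b: main diagonal (len n),
    c: super-diagonal (len n-1), d: right-hand side (len n).
    Runs in O(n), versus O(n^3) for a dense LU factorization.
    """
    n = len(b)
    cp = np.zeros(n - 1)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):             # forward elimination
        denom = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):    # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# A small tridiagonal system mimicking a 1-D nearest-neighbour chain.
n = 6
a = -np.ones(n - 1)          # sub-diagonal (hopping terms)
b = 4.0 * np.ones(n)         # main diagonal (on-site terms)
c = -np.ones(n - 1)          # super-diagonal
d = np.arange(1.0, n + 1)    # right-hand side

x_thomas = thomas_solve(a, b, c, d)

# Dense LU-based solve (the general approach) for comparison.
A = np.diag(b) + np.diag(a, -1) + np.diag(c, 1)
x_lu = np.linalg.solve(A, d)

print(np.allclose(x_thomas, x_lu))  # → True
```

Both routes give the same solution; the point is that the banded solver never materializes the full matrix, which is why a dense-LU GPU code struggles to beat it on tridiagonal problems.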
  • Tompuri, Seppo (2014)
    Computer games have traditionally been developed for either desktop computers or game consoles. The explosively growing mobile game market has, however, challenged these platforms. Mobile games are often developed in the same style as desktop and console games, even though mobile devices contain technology not found on desktops and consoles that enables non-traditional forms of user input. Modern mobile devices include, among other things, various sensors that monitor the environment, and often also a GPS receiver. The GPS receiver provides the game with the player's location, while the sensors monitor the device's surroundings, for example the device's acceleration in three dimensions. The data obtained from the sensors can be used as user input either directly or after processing. The data can also be used to infer gestures made by the player, which can be tied into the game's user input through pattern recognition. This kind of novel user input enabled by mobile devices makes possible new game genres that use the player's movement and location as part of the game mechanics. This thesis presents the parts of a game engine involved in collecting and processing GPS and sensor data, and introduces an architecture model, designed in this work, for exploiting GPS and sensor data on mobile devices. The thesis first introduces the relevant concepts, the game loop that keeps the game running in real time, and its various architecture models. It then covers the aspects of a game engine's runtime architecture related to collecting GPS and sensor data. Finally, the architecture model designed in this work for exploiting GPS and sensor data in a mobile game engine architecture is presented and evaluated. 
Alongside the thesis work, the feasibility of the proposed model is verified with an implementation for the Windows Phone 8 platform, and the source code of this implementation is used to illustrate the proposed model.
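The pipeline the abstract describes, a game loop whose input stage polls GPS and sensor readings and feeds them to the game state as events, can be sketched as follows. All names (`SensorHub`, `Game`) and the readings are hypothetical stand-ins for the platform sensor APIs, not the thesis's Windows Phone 8 implementation:

```python
from collections import deque

class SensorHub:
    """Hypothetical stand-in for platform GPS/accelerometer APIs."""
    def __init__(self):
        self._t = 0.0

    def poll(self):
        # A real implementation would read the device sensors here;
        # this stub produces a deterministic fake reading instead.
        self._t += 1.0
        return {"gps": (60.17 + 0.001 * self._t, 24.94),
                "accel": (0.0, 0.0, 9.81)}

class Game:
    def __init__(self, hub):
        self.hub = hub
        self.events = deque()
        self.position = None

    def gather_input(self):
        # Input stage of the loop: raw sensor data becomes input events.
        self.events.append(self.hub.poll())

    def update(self):
        # Game-state stage: consume events, e.g. move the player
        # to the latest GPS fix.
        while self.events:
            reading = self.events.popleft()
            self.position = reading["gps"]

# Fixed number of iterations standing in for the real-time game loop.
game = Game(SensorHub())
for _ in range(3):
    game.gather_input()
    game.update()
print(game.position)
```

Decoupling sensor polling from state updates through an event queue is one common way to keep the loop's timing independent of sensor latency; the thesis's architecture model addresses this separation in more detail.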
  • Tyrväinen, Lasse (2016)
    Learning a model over possible actions and using the learned model to maximize the obtained reward is an integral part of many applications. Trying to simultaneously learn the model by exploring the state space and maximize the obtained reward using the learned model is an exploration-exploitation tradeoff. The Gaussian process upper confidence bound (GP-UCB) algorithm is an effective method for balancing between exploitation and exploration when exploring spatially dependent data in n-dimensional space. The balance between exploration and exploitation is required to limit the amount of user feedback needed to achieve good prediction results in our context-based image retrieval system. The system starts with a high degree of exploration and — as the confidence in the model increases — it starts exploiting the gathered information to direct the search towards better results. While the implementation of GP-UCB is quite straightforward, it has a time complexity of O(n^3), which limits its use in near real-time applications. In this thesis I present our reinforcement learning image retrieval system based on GP-UCB, with a focus on the speed requirements of interactive applications. I also show simple methods to speed up the algorithm's running time by doing some of the Gaussian process calculations on the GPU.
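The GP-UCB selection rule picks the point maximizing μ(x) + √β · σ(x), where μ and σ come from the Gaussian process posterior. A minimal 1-D NumPy sketch (a generic illustration, not the thesis's retrieval system; kernel length-scale and β are arbitrary choices) looks like this:

```python
import numpy as np

def rbf(a, b, ell=0.3):
    # Squared-exponential kernel on 1-D inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_ucb_pick(X_obs, y_obs, candidates, beta=4.0, noise=1e-4):
    """Return the candidate maximizing the GP-UCB score mu + sqrt(beta)*sigma."""
    K = rbf(X_obs, X_obs) + noise * np.eye(len(X_obs))
    K_inv = np.linalg.inv(K)          # the O(n^3) step the thesis targets
    k_star = rbf(candidates, X_obs)   # cross-covariances to observations
    mu = k_star @ K_inv @ y_obs                              # posterior mean
    var = 1.0 - np.sum((k_star @ K_inv) * k_star, axis=1)    # posterior variance
    sigma = np.sqrt(np.clip(var, 0.0, None))
    score = mu + np.sqrt(beta) * sigma
    return candidates[np.argmax(score)]

# Toy reward peaking at x = 0.7; two noiseless observations so far.
f = lambda x: np.exp(-20 * (x - 0.7) ** 2)
X_obs = np.array([0.1, 0.5])
y_obs = f(X_obs)
candidates = np.linspace(0.0, 1.0, 101)
x_next = gp_ucb_pick(X_obs, y_obs, candidates)
```

With few observations the σ term dominates, so the rule steers the next query into the unexplored right half of the interval; as observations accumulate, σ shrinks and the μ term (exploitation) takes over. The matrix inversion is the cubic-cost step that motivates moving the GP algebra to the GPU.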
  • Jylhä-Ollila, Pekka (2020)
    K-mer counting is the process of building a histogram of all substrings of length k for an input string S. The problem itself is quite simple, but counting k-mers efficiently for a very large input string is a difficult task that has been researched extensively. In recent years the performance of k-mer counting algorithms has improved significantly, and there have been efforts to use graphics processing units (GPUs) in k-mer counting. The goal of this thesis was to design, implement and benchmark a GPU-accelerated k-mer counting algorithm, SNCGPU. The results showed that SNCGPU compares reasonably well to the Gerbil k-mer counting algorithm on a mid-range desktop computer, but does not utilize the resources of a high-end computing platform as efficiently. The implementation of SNCGPU is available as open-source software.
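The underlying task can be stated in a few lines of Python. This is a naive serial baseline for illustration only, unrelated to the SNCGPU implementation, which the thesis accelerates on the GPU precisely because this simple approach does not scale to genomic inputs:

```python
from collections import Counter

def count_kmers(s, k):
    """Histogram of all length-k substrings of s."""
    return Counter(s[i:i + k] for i in range(len(s) - k + 1))

hist = count_kmers("ATATAT", 2)
print(hist["AT"], hist["TA"])  # → 3 2
```

Efficient counters avoid hashing every substring independently, e.g. by encoding k-mers as packed integers and partitioning them across threads, which is where GPU parallelism enters.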