
Browsing by Title


  • Hallman, Merrit (2015)
    Removal of introns, the non-coding sequences, from mRNA is an essential step in all eukaryotic gene expression. U12-type introns are a minor subgroup of introns that are extremely rare but present in most eukaryotes. These introns are removed by the U12-dependent spliceosome, often referred to as the minor spliceosome. U12-type introns and the minor spliceosome are highly conserved throughout all taxa. This study maps out the frequency and variation of U12-type introns and the U12-dependent spliceosome in the domestic dog. The study uses genome sequencing data from 145 dogs representing multiple breeds. The data are in Variant Call Format and were analysed with Python scripts. The study yields the expected results: the frequency and variation are similar to those in human. A more focused data set should be used to further study any pathological traits of the U12-type spliceosomal system.
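The abstract mentions Python analysis of Variant Call Format data but does not show the scripts. The following is a minimal, hypothetical sketch of the kind of counting such scripts might do: it tallies VCF records that fall inside a set of U12-type intron regions. The inline region and VCF contents are placeholders, not material from the thesis.

```python
# Hypothetical sketch: count VCF records that fall inside U12-type intron
# regions. Region and VCF contents below are tiny inline examples; real
# scripts would read files instead.
import io
from collections import defaultdict

def load_regions(handle):
    """Read BED-like intron regions: chrom, start, end (0-based, half-open)."""
    regions = defaultdict(list)
    for line in handle:
        chrom, start, end = line.split()[:3]
        regions[chrom].append((int(start), int(end)))
    return regions

def count_variants(handle, regions):
    """Count VCF records whose position lies inside any listed region."""
    counts = defaultdict(int)
    for line in handle:
        if line.startswith("#"):          # skip VCF header lines
            continue
        chrom, pos = line.split("\t")[:2]
        pos = int(pos) - 1                # VCF positions are 1-based
        if any(start <= pos < end for start, end in regions.get(chrom, ())):
            counts[chrom] += 1
    return dict(counts)

bed = io.StringIO("chr1\t100\t200\nchr2\t50\t80\n")
vcf = io.StringIO("##fileformat=VCFv4.2\nchr1\t150\t.\tA\tG\t.\t.\t.\n"
                  "chr1\t500\t.\tC\tT\t.\t.\t.\nchr2\t60\t.\tG\tA\t.\t.\t.\n")
print(count_variants(vcf, load_regions(bed)))   # {'chr1': 1, 'chr2': 1}
```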
  • Sorvali, Jaana (2015)
    The main objective of this thesis was to examine climate change discourses and their meaning for the future of Finland. The study focused on political-level discourses. The national review was made by studying the discussions held at the Parliament of Finland on 10.6.2014 and 25.2.2015 concerning the Climate Act. A shorter regional review was made based on interviews of the Regional Mayors in Finland. The principal method in the thesis was data-driven, qualitative content analysis. The Climate Act related discussions by the Members of Parliament were analysed by looking into the contents of the arguments, and the material from the Regional Mayor interviews was analysed by looking more into the signifiers that were attached to climate change. Another method used in the thesis was a theory-driven one based on the environmental discourses of John S. Dryzek. This method was used to analyse the Climate Act discussions, and the purpose was to find out whether the Finnish climate change related discussion contains any of the environmental discourses proposed by Dryzek in his theoretical studies. One aim of the thesis was also to examine whether any possibilities have been incorporated into the climate discussions. The analysis based on the data-driven method found five thematic groups of arguments in the speeches of the MPs supporting the Climate Act. The arguments focused on the content and targets of the Act, the institutionalisation of climate issues, the meaning of the Climate Act for national climate policy, strengthening the legitimacy of the Climate Act, and the possibilities of climate change. The MPs who voted for the acceptance of the Climate Act in the Finnish Parliament also brought out concerns regarding the Act. The arguments containing these concerns were grouped into four thematic groups: arguments illustrating the effects of the Climate Act on national economic success, doubts concerning the effectiveness of national climate policy, the content of the Climate Act, and administrative issues. The MPs who voted against the Climate Act used the same thematic arguments as the ones expressing their concerns. The MPs attached manifold signifiers to climate change, and these were also examined in the thesis. The discussions concerning the Climate Act were also analysed from the political party perspective with both the data- and theory-driven methods. The analysis found clear differences between parties in the emphasis of the different arguments. The analysis of the Regional Mayor interviews was made with the data-driven method by identifying and analysing the signifiers attached to climate change. The signifiers were grouped into three groups according to the place given to climate change in regional actions: climate change was defined as either a central, a crosscutting or a secondary theme. Some of the Regional Mayors also described the public opinion concerning climate change in their region, and this was also examined briefly in the thesis. The possibilities linked to climate change were almost without exception connected to the industrial policy success factors of the regions.
The main results of the thesis were the questioning of the consensus regarding the approval of the Finnish Climate Act, the restrained reformist goals of national climate policy, the dual nature of climate policy as both a threat and a possibility, the division of the Finnish regions by their willingness to execute climate actions at the regional level, and first and foremost the connection between climate policy and Finnish wellbeing and economic success. Based on the results presented in the thesis, a theory of the interconnectedness of climate policy and economic success was developed. The theory aims at explaining the contents of the Finnish political climate discourses from a new perspective.
  • Koskela, Joonas (2017)
    Little research has been done on classifying thunderstorm intensity. The severity of a storm can be defined on the basis of all the weather phenomena associated with it (heavy rain and hail, lightning, downbursts, tornadoes). Often the intensity classification is based on the damage caused by the phenomenon (downbursts, tornadoes), which requires human observations from the damage area; such a classification is not available in real time. In the case of thunderstorms, a meaningful phenomenon on which to base an intensity classification is lightning. Thunderstorm intensity classifications based on lightning have been made before, but they rely on annual or daily flash counts. To define the instantaneous intensity of a thunderstorm, this work studies cloud-to-ground flash counts over Finland during 15-minute periods in 20 x 20 km observation areas. The purpose of the work was to find out whether lightning observations can be used to construct an objective classification of instantaneous thunderstorm intensity, and what a sensible division into intensity classes would be. The study was carried out by going through the observations of the NORDLIS lightning location network from the years 2002–2016. From the observational data, cloud-to-ground flash counts were computed in 15-minute time steps for 2,584 grid cells covering Finland and its immediate surroundings. The observed flash counts ranged from 1 to 325 flashes per 400 km² per 15 minutes. From these results, a probability distribution of instantaneous lightning activity was computed. Based on the results of this work, a logarithmically evenly spaced five-class scale for instantaneous thunderstorm intensity was defined, in which the lowest class L1 (1–3 cloud-to-ground flashes per 400 km² per 15 minutes) describes a weak thunderstorm and the highest class L5 (>100 flashes per 400 km² per 15 minutes) an extremely severe one. The defined classification of instantaneous thunderstorm intensity can be used in regions whose conditions correspond to those of Finland, provided that lightning observations of sufficient accuracy are available. In regions where thunderstorms are clearly stronger than in Finland, this classification probably gives values that are too low for the instantaneous intensity.
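As a rough illustration of such a logarithmically spaced scale, the sketch below maps a 15-minute cloud-to-ground flash count to a class L1–L5. Only the L1 (1–3 flashes) and L5 (>100 flashes) limits are given in the abstract; the intermediate boundaries (10 and 32) are my assumption of an even split on a logarithmic scale, not figures from the thesis.

```python
# Illustrative five-class scale for instantaneous thunderstorm intensity.
# Boundaries 10 and 32 are assumed (log-even spacing); only 1-3 for L1 and
# >100 for L5 are stated in the abstract.
def intensity_class(flashes: int) -> str:
    """Map a cloud-to-ground flash count per 400 km^2 per 15 min to L1-L5."""
    if flashes < 1:
        return "no thunderstorm"
    for label, upper in (("L1", 3), ("L2", 10), ("L3", 32), ("L4", 100)):
        if flashes <= upper:
            return label
    return "L5"

print([intensity_class(n) for n in (2, 8, 25, 90, 180)])   # L1 ... L5
```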
  • Asmi, Ari (University of Helsinki, 2000)
  • Alila, Antti (2015)
    This thesis uses statistical methods to examine the effect of outdoor temperature on the number of cardiovascular disease cases requiring urgent care. Previous research has found that cardiovascular mortality increases in both hot and cold weather. For morbidity, however, the evidence is not unambiguous, so there is a justified need for new information. Studying the phenomenon helps in planning measures to prevent the adverse health effects of cold and heat and in anticipating variations in the need for health services. The morbidity data consist of a time series of the daily number of care episodes for cardiovascular diseases (ICD-10 chapter I) that began through emergency care in the Hospital District of Helsinki and Uusimaa. The data were obtained from the Care Register for Health Care of the National Institute for Health and Welfare. In addition, the analysis uses a time series of the daily mean temperature in the hospital district area, obtained from the European Climate Assessment & Dataset service. Both data sets cover the years 1987–2012. To examine the relationship between temperature and the number of care episodes, a Poisson regression model is fitted to the data. Estimation uses the maximum likelihood estimator, and possible overdispersion is taken into account in the modelling. The response variable is the logarithm of the number of care episodes. The explanatory variables are lagged values of the daily mean temperature up to 15 days back. The temperature effect is assumed to be piecewise linear in two parts. Because the daily temperature series is strongly autocorrelated, a polynomial constraint is imposed on the regression parameters of the lagged temperature values. This leaves fewer free parameters in the model equation and makes the delayed effect of temperature easier to estimate. The other explanatory variables of the statistical model are a spline function describing the long-term trend of the care episodes, indicator variables for step changes in the care episode series, and indicator variables for month, day of the week and annually recurring holidays. The key result is that the number of cardiovascular disease cases decreases as the outdoor temperature rises. For example, at -20 °C the expected number of care episodes is 13.4% higher than at +20 °C. Below the breakpoint of the piecewise linear model (16.5 °C), the number of care episodes decreases by 0.23% (95% confidence interval 0.11–0.35%) for each one-degree rise in temperature. At temperatures above the breakpoint, a one-degree rise lowers the number of cases by 1.19% (confidence interval 0.68–1.72%). The case-increasing effect of cold appears to be delayed, being largest at a lag of 7–10 days. Based on the results, a drop in outdoor temperature increases cardiovascular morbidity and the need for health services. On the other hand, the results highlight a contradiction that has also been observed in a few earlier studies: the number of cardiovascular care episodes decreases in hot weather even though mortality increases at the same time. This may indicate that heat affects the human body in several ways, some of which are beneficial and some harmful to health. The results provide grounds for seeking to prevent cold-related disease cases and for investigating the health effects of heat in more detail.
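A minimal sketch of the modelling idea described above, under stated assumptions: a Poisson GLM fitted with statsmodels, with piecewise-linear temperature terms split at 16.5 °C and a polynomial constraint on the 0–15 day lag coefficients. The data, the polynomial degree and the omitted trend, step and calendar covariates are placeholders; this is not the thesis code.

```python
# Minimal sketch of a Poisson regression with a polynomial distributed-lag
# constraint on lagged temperature. Synthetic data; degree 3 is an assumption.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, max_lag, degree, breakpoint = 600, 15, 3, 16.5
temp = 5 + 12 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 3, n)
counts = rng.poisson(30, n)                       # placeholder daily counts

# Piecewise-linear temperature terms (cold part below / warm part above 16.5 C).
cold = np.minimum(temp, breakpoint)
warm = np.maximum(temp - breakpoint, 0.0)

def lag_matrix(x, max_lag):
    """Columns are x lagged by 0..max_lag days (rows lacking full history dropped later)."""
    return np.column_stack([np.roll(x, l) for l in range(max_lag + 1)])

# Polynomial constraint: beta_l = sum_k theta_k * l**k, so the design for
# theta is the lag matrix times P with P[l, k] = l**k.
P = np.vander(np.arange(max_lag + 1), degree + 1, increasing=True)
X = np.column_stack([lag_matrix(cold, max_lag) @ P,
                     lag_matrix(warm, max_lag) @ P])
valid = slice(max_lag, None)                      # drop rows without full lag history
X = sm.add_constant(X[valid])
model = sm.GLM(counts[valid], X, family=sm.families.Poisson()).fit()

# Recover the unconstrained lag coefficients beta_l = P @ theta for the cold part.
theta_cold = model.params[1:degree + 2]
print("cold-side lag effects:", P @ theta_cold)
```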
  • Laiho, Satu (2017)
    Exhaled breath has long been known to tell about the health of the person breathing. Breath has been estimated to contain thousands of compounds of purely endogenous origin, but only a few of them have been studied thoroughly enough to make their use in clinical diagnostics possible. The potential of breath analysis therefore remains largely untapped, even though its advantages over, for example, blood analysis are obvious. Exhaled breath is a simple sample matrix, its collection is fast, painless and easy, and the samples can be analysed in just a few minutes. The potential of breath analysis as a diagnostic method is based on the thin capillary membrane between the alveoli and the bloodstream, through which many compounds in the circulation can evaporate directly into the exhaled breath. For many compounds, the concentrations found in breath therefore directly reflect their concentrations in the blood. One potential biomarker that has attracted interest is ammonia, which has been linked to, among other things, renal failure and elevated blood urea concentrations. Because of its exceptionally basic nature, however, the excretion mechanism of breath ammonia has been thought to differ from that of other endogenous compounds, and the compound has been suspected of entering breath by evaporating from saliva rather than from blood. This uncertainty about the production mechanism of ammonia has long slowed the understanding of its analytical potential. In addition to breath, saliva analyses have also attracted attention thanks to the easy collection and distinctive composition of saliva. In particular, salivary urea concentrations, like breath ammonia, have been linked to blood urea concentrations and kidney function, which is why salivary urea has also been considered a potential biomarker. This thesis discusses the formation of breath ammonia and of salivary ammonia and urea, and assesses their relationship with the body's ammonia and urea concentrations. In addition, the thesis covers the factors to be taken into account in breath and saliva measurements and the performance of the measurements themselves. For the experimental part of the thesis, breath and saliva ammonia concentrations and blood and saliva urea concentrations were measured from 12 renal failure patients undergoing dialysis over the course of their treatment. Based on the results, the aim was to assess the origin of breath ammonia and the ability of salivary urea and breath ammonia to measure the uraemic state of renal failure patients and the efficacy of dialysis treatment.
  • Pekkarinen, Antti (2016)
    Ultrasound examinations based on the two-dimensional greyscale image, the B-mode image, are the most common of all medical ultrasound examinations. Equipment faults that degrade B-mode image quality can at worst lead to a wrong diagnosis. There are therefore grounds for performing quality assurance on ultrasound equipment, so that faults can be detected before they degrade B-mode image quality in a clinically significant way. A B-mode quality assurance procedure can be based on visual analysis of test-object images and on measurements made manually with the measurement tools of the equipment. Visual analysis and manual measurements, however, involve significant observer-dependent inaccuracy and consume a great deal of staff and equipment time. As an alternative to visual analysis, analysis software exists that can potentially improve the objectivity of the procedure, speed up the measurements and image analysis, and facilitate the visualisation and reporting of the results. The aim of this work is to become familiar with B-mode quality assurance of ultrasound equipment and to compile research-based knowledge on what an optimal quality assurance procedure should be like. The picture given by the literature review is complemented by an experimental part, whose aim is to study the suitability of a procedure based on software image analysis for the quality assurance of a large fleet of equipment. Quality assurance procedures based on visual and on software analysis of B-mode images were compared by analysing the images from the annual quality assurance measurements of 74 transducers belonging to 32 different systems with both methods and comparing the results for visualisation depth and for axial and lateral resolution. To optimise the software-based procedure, the effect on the uniformity results of the number of test-object images used to determine image uniformity, and of the cropping of the averaged image used in the analysis, was studied. To determine suitable equipment settings for the measurements, the effect of gain and of the depth of the in-plane focal zone on the software-determined six-decibel attenuation depth was also studied. In the parallel analysis, the visualisation depth and resolution results showed considerable scatter between individual transducers of the same model. There were also differences in the results that the different methods gave for the same transducer. Based on the results of this work, 10 images was a sufficient number of uniformity images for software analysis of image uniformity. A suitable relative cropping of the image area was either 5% or 10%, depending on the transducer type. The six-decibel attenuation depth depended strongly on the gain value when the gain was low; at high gain values the attenuation depth remained nearly constant. For the focal depth, the six-decibel attenuation depth was nearly constant at large focal depths and varied somewhat at smaller focal depths. It is concluded that the visualisation depth and axial and lateral resolution values measured for individual transducers of the same model are not directly comparable with each other. Nor were results measured for the same transducer with different methods comparable with each other. Measurement results should therefore be compared with results of earlier measurements made on the same transducer with the same method.
A procedure based on software analysis of the images is better than one based on visual analysis in terms of repeatability, objectivity and efficiency. Based on the results, I propose a quality assurance procedure based on software image analysis.
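As a rough sketch of what a software-based determination of the six-decibel attenuation depth could look like (not the software used in the thesis): average a stack of uniformity images, crop the image area by 5–10%, and find the depth at which the mean signal has dropped 6 dB from its maximum. The synthetic image stack, the pixel size and the 20·log10 (amplitude) convention are assumptions.

```python
# Illustrative sketch: average phantom B-mode images, crop laterally, and
# estimate the depth at which the mean signal is 6 dB below its maximum.
import numpy as np

def six_db_depth(images, crop_frac=0.05, pixel_size_mm=0.1):
    """images: array of shape (n_images, depth_pixels, width_pixels)."""
    mean_img = images.mean(axis=0)                     # average ~10 uniformity images
    w = mean_img.shape[1]
    crop = int(round(crop_frac * w))
    roi = mean_img[:, crop:w - crop]                   # 5 % or 10 % lateral crop
    profile = roi.mean(axis=1)                         # mean signal vs depth
    profile_db = 20 * np.log10(profile / profile.max())
    below = np.nonzero(profile_db <= -6.0)[0]          # first depth 6 dB down
    return below[0] * pixel_size_mm if below.size else None

# Synthetic example: exponentially attenuating phantom stack with noise.
rng = np.random.default_rng(1)
depth = np.arange(400)
stack = np.exp(-depth / 150)[None, :, None] * np.ones((10, 400, 256))
stack += rng.normal(0, 0.01, stack.shape)
print("6 dB depth (mm):", six_db_depth(stack))
```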
  • Salonen, Fanny (2017)
    The latest version of liquid chromatography is ultra-high performance (or pressure) liquid chromatography (UHPLC). In the technique, short, narrow-bore columns with particle sizes below 3 µm are used. The extremely high pressure results in very short analysis times, excellent separation, and good resolution. This makes UHPLC a good choice for steroid analysis. Steroids are a highly interesting area of study; they can be recognized as biomarkers for several diseases and are a relevant topic in doping testing. In this thesis, articles on the topic ‘steroid analysis with UHPLC’ published prior to April 2017 are reviewed. UHPLC is always combined with mass spectrometry (MS) for steroid analysis. The MS instrument utilized is usually multi-dimensional: quadrupole time-of-flight (QTOF) or triple quadrupole (QqQ). The instrumentation is suitable for both untargeted and targeted analysis. In untargeted studies, the study of changes in the human metabolome has been especially interesting. The articles on targeted studies usually focus on doping control and the quantification of identified biomarkers. Analysis with UHPLC-MS/MS usually provides reliable results with fast analysis times, without complicated sample preparation. Typically, the sample preparation process can include only protein precipitation, liquid-liquid extraction or solid-phase extraction. UHPLC is also a valuable tool in simple and routine analysis. The separation efficiency is increased by the small plate height, and the analysis time can thus be reduced. In this thesis work the technique was utilized for the analysis of food additives. For validation of a UHPLC method, the repeatability, trueness, bias, measurement uncertainty and other factors need to be assessed. The experimental part of the thesis is dedicated to describing the development and validation of a method for the analysis of five food additives and caffeine. The developed method was partly validated, with the aim of fulfilling the needs of the Finnish Customs Laboratory. The optimized method comprised an injection volume of 2 µL and a flow rate of 1.0 mL/min. The buffer was a phosphate buffer at pH 4.0, and the gradient elution program ran from 6% to 30% acetonitrile in 1.6 minutes, followed by 6% acetonitrile from 1.6 to 1.7 minutes. The total run time was only 1.7 minutes. The limit of detection values were between 0.02 µg/mL and 1.73 µg/mL. The limit of quantitation values were between 0.054 µg/mL and 5.78 µg/mL, which should be sufficient for the Customs' need to check whether a product exceeds a certain limit. Expanded measurement uncertainties were around 20%.
  • Meriläinen, Antti Iisakki (2013)
    Coded signals are widely used in ultrasound microscopy, where they can improve the signal-to-noise ratio and imaging quality. However, all solutions reported so far have bandwidths limited to under 135 MHz. We present a solution for ultrasonic microscopy that is capable of dealing with coded GHz signals. The ultrasound microscope features a high-frequency arbitrary signal source, a solid state switch, a microcontroller trigger device, and a preamplifier. Our signal source, based on a quadrature I-Q modulator, modulates a high-frequency carrier with low-frequency arbitrary signals and produces a high-frequency arbitrary signal. This signal source gives more than 200 MHz of bandwidth in the center frequency range of 0.2 - 1 GHz that can be used for coded signals. For pulse-echo operation we use commercial 3 GHz solid state switches that handle 15 Vpp signals and are controlled by an Atmel microcontroller. The preamplifier is a custom-built low-noise broadband preamplifier (3.3 dB noise figure, 0.01 - 1.2 GHz). The components were originally developed for wireless networking. We performed electrical tests on our devices with several signals in the frequency range 0.1 - 1.1 GHz. We also validated our device with a custom-built ultrasound immersion microscope by imaging a 5 µm tall MEMS step structure using 100 - 300 MHz, 3 µs, 10.5 Vpp linear and non-linear chirp excitation. This device allows one to use high-frequency coded signals and advances the state of the art of ultrasonic microscopy.
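A minimal sketch of generating the kind of coded excitation mentioned above with scipy: a 3 µs linear chirp sweeping 100–300 MHz, scaled to 10.5 Vpp. The 2 GHz sampling rate is an illustrative assumption, not a figure from the thesis.

```python
# Generate a 3 us linear chirp from 100 MHz to 300 MHz, scaled to 10.5 Vpp.
import numpy as np
from scipy.signal import chirp

fs = 2e9                                   # 2 GHz sampling rate (assumption)
t = np.arange(0, 3e-6, 1 / fs)             # 3 us excitation window
signal = chirp(t, f0=100e6, t1=3e-6, f1=300e6, method="linear")
signal *= 10.5 / 2                         # scale to 10.5 Vpp (+/- 5.25 V)
print(signal.shape, signal.min(), signal.max())
```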
  • Kivimäki, Juhani (2022)
    In this thesis, we give an overview of current methodology in the field of uncertainty estimation in machine learning, with a focus on confidence scores and their calibration. We also present a case study, in which we propose a novel method to improve the uncertainty estimates of an in-production machine learning model operating in an industrial setting with real-life data. This model is used by the Finnish company Basware to extract information from invoices in the form of machine-readable PDFs. The solution we propose is shown to produce confidence estimates which outperform the legacy estimates on several relevant metrics, increasing the coverage of automated invoices from 65.6% to 73.2% with no increase in error rate.
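The case study concerns calibrated confidence scores and an automation threshold. The sketch below, on synthetic data, shows the basic coverage versus error-rate trade-off that such a threshold controls; it is not Basware's model or data.

```python
# Coverage / error-rate trade-off of a confidence threshold: automate a
# prediction only when its confidence exceeds the threshold, then measure
# coverage and the error rate among automated items. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
confidence = rng.uniform(0.5, 1.0, 10_000)
# Synthetic labels: higher confidence -> higher chance of being correct.
correct = rng.uniform(size=confidence.size) < confidence

def coverage_and_error(threshold):
    automated = confidence >= threshold
    coverage = automated.mean()
    error_rate = 1.0 - correct[automated].mean()
    return coverage, error_rate

for thr in (0.7, 0.8, 0.9):
    cov, err = coverage_and_error(thr)
    print(f"threshold {thr:.2f}: coverage {cov:.1%}, error rate {err:.1%}")
```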
  • Heiskanen, Lauri (2017)
    This thesis is a study of the uncertainties related to the eddy covariance measurement technique over a forest ecosystem located in Hyytiälä, Southern Finland. The aim of this study is to analyze carbon dioxide and energy fluxes measured by two vertically displaced eddy covariance set-ups, and in particular to determine whether the observed deviations between the set-ups can be linked to micrometeorological or biological variations or whether they result merely from the stochastic nature of turbulence. The magnitude of the uncertainties linked to the eddy covariance technique is still under discussion, and this thesis attempts to shed light on these questions. The analysis is done on half-hourly mean flux and meteorological data measured at the Hyytiälä SMEAR II site in 2015 at heights of 23.3 m and 33.0 m. Monthly, diurnal and cumulative variations of the fluxes are analyzed. A footprint model is used to analyze the correlation of the fluxes with the underlying vegetation. The dependence of the fluxes on atmospheric stability is also determined. The analysis shows that the annual cumulative difference in net ecosystem exchange (CO2 exchange) between the two measurement heights is estimated to be 49 g C m^-2 year^-1 (a 17% difference). The annual cumulative evapotranspiration difference is estimated to be 105 mm (a 29% difference). There are no significant differences between the sensible heat fluxes. The difference between the measurement heights does not seem to significantly influence the flux estimates made with the eddy covariance method. However, the latent heat flux measured by the 33.0 m set-up is consistently smaller than that of the 23.3 m set-up.
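As a small illustration of how the cumulative flux comparison works, the sketch below converts synthetic half-hourly CO2 fluxes (µmol m^-2 s^-1) from two measurement heights to annual sums in g C m^-2 and reports the relative difference. The conversion factor (1800 s per averaging period, 12.01 µg C per µmol CO2) is standard; the flux values are invented.

```python
# Convert half-hourly CO2 fluxes from two heights to annual cumulative NEE
# and compare them. Synthetic data, not SMEAR II measurements.
import numpy as np

SECONDS = 1800                      # half-hour averaging period
UMOL_TO_GC = 12.01e-6               # g of carbon per umol of CO2

rng = np.random.default_rng(2)
flux_23m = rng.normal(-1.0, 3.0, 17_520)          # synthetic half-hourly NEE
flux_33m = flux_23m + rng.normal(0.1, 0.5, 17_520)

def annual_nee(flux_umol):
    return np.sum(flux_umol * SECONDS * UMOL_TO_GC)   # g C m^-2 year^-1

nee_low, nee_high = annual_nee(flux_23m), annual_nee(flux_33m)
print(nee_low, nee_high,
      abs(nee_high - nee_low) / abs(nee_low) * 100, "% difference")
```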
  • Lappo, Sampo (2015)
    Econometric microsimulation models that simulate the effects of taxation and social benefit legislation on the disposable incomes of individuals and households are widely used by social scientists and policymakers worldwide. The results produced by these models have a degree of uncertainty arising from multiple sources. One of these is sampling error that is caused by the fact that the simulation is performed on a sample of the total population of interest. However, assessment of the accuracy of results through the estimation of sampling variability caused by this error is still largely absent in the microsimulation literature. The users of econometric microsimulation models are often interested in the values of certain inequality and poverty indicators. This thesis presents variance estimation methods that can be employed to produce variance estimates for these indicators. The main focus is on bootstrap and linearization methods for variance estimation, and the indicators considered are the at-risk-of-poverty threshold (ARPT), the at-risk-of-poverty rate (ARPR) and the Gini coefficient. The efficiency of the variance estimation methods is tested in a simulation study performed on a data set produced by the SISU microsimulation model developed by Statistics Finland. The methods are also employed in a hands-on case study to help assess the effects of an actual legislative reform simulated by the SISU model. It is found that both bootstrap and linearization methods for variance estimation produce relatively good variance estimates for the indicators considered, with linearization being the more effective of the two. However, high outlier incomes are shown to cause difficulties in the variance estimation of the Gini coefficient with both methods.
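A plain nonparametric bootstrap for the variance of the Gini coefficient, one of the indicators discussed above, can be sketched as follows. This ignores survey weights and the SISU model's structure, so it only illustrates the principle.

```python
# Bootstrap variance estimate of the Gini coefficient on synthetic incomes.
import numpy as np

def gini(incomes):
    """Gini coefficient of a 1-D array of non-negative incomes."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * x) / (n * x.sum()) - (n + 1) / n

rng = np.random.default_rng(3)
incomes = rng.lognormal(mean=10, sigma=0.6, size=5_000)   # synthetic sample

B = 1_000                                                 # bootstrap replicates
boot = np.array([gini(rng.choice(incomes, incomes.size, replace=True))
                 for _ in range(B)])
print("Gini:", gini(incomes), "bootstrap variance:", boot.var(ddof=1))
```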
  • Bergroth, Claudia (2019)
    Understanding the whereabouts of people in time and space is necessary for unraveling how our societies function. Nevertheless, our understanding of human presence is predominantly based on static residential population data, which are often outdated and exclude certain population groups, such as commuters or tourists. In light of the development towards 24-hour societies and the need to promote sustainable and equitable urban planning, reliable data on population dynamics are needed. To this end, ubiquitous mobile phones provide an attractive source for estimating the spatiotemporal digital footprints of people. In this study, I set out to investigate 1) the feasibility of three different types of aggregated network-based mobile phone data – the number of voice calls, data transmission and general network connection attempts – as a proxy for human presence, 2) how the population distribution varies in the Helsinki Metropolitan Area over the course of a regular weekday and 3) the role of temporally sensitive population data when analysing dynamic accessibility to grocery stores and transport hubs. To the best of my knowledge, this is the first time mobile phone data have been used to reveal population dynamics for scientific purposes in Finland. Mobile phone data collected by the mobile network operator Elisa in 2017–2018 and ancillary data about land cover, buildings and a time-use survey were used to estimate the 24-hour population distribution of the Helsinki Metropolitan Area. The mobile phone data were allocated to statistical 250 m x 250 m grid cells using an advanced dasymetric interpolation method and validated against population register data from Statistics Finland. The resulting 24-hour population was used to map the pulse of the city and to introduce the first fully dynamic accessibility model in the study area. The results show that data use is a good proxy for people and outperforms voice calls or overall network connection attempts. During daytime, the static population overestimates the population in residential areas and underestimates the population in work and service areas. In general, the 24-hour population reveals the pulse of the city, which is highlighted especially in the inner city of Helsinki, where the relative share of the population of the study area increases by 50% from its night-time share to its peak at noon. The results of the case study suggest that integrating dynamic population data into location-based accessibility analysis provides more realistic results than static population data, but the significance of dynamic population data depends on the study context and research questions. In summary, aggregated network-driven mobile phone data are a feasible alternative for dynamic population modelling; however, different mobile phone data types vary in representativeness, which should be taken into account when using mobile phone data in research. To this end, critical evaluation of the data and transparent data description are essential. Overall, understanding 24-hour societies and supporting sustainable urban planning necessitate dynamic population data, but advances in data policy and availability are needed to harvest these possibilities. The results of this study also provide new empirical insights into the population dynamics of the study area, which can be used to advance planning and decision making.
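The dasymetric interpolation is described above only at a high level; the sketch below shows the basic principle under stated assumptions: phone activity observed in one coverage area is split among 250 m grid cells in proportion to an ancillary weight, here building floor area multiplied by an hourly activity factor. All numbers and weights are invented, and the thesis method is more advanced.

```python
# Illustrative dasymetric allocation of a coverage-area activity total onto
# grid cells, weighted by ancillary data. Numbers are invented.
import numpy as np

def allocate(total_activity, floor_area, hourly_factor):
    """Split a coverage-area total onto grid cells by relative weight."""
    weights = floor_area * hourly_factor
    weights = weights / weights.sum()
    return total_activity * weights

floor_area = np.array([1200.0, 300.0, 0.0, 4500.0])   # m^2 of buildings per 250 m cell
hourly_factor = np.array([0.2, 0.9, 0.0, 0.6])        # e.g. residential vs work use at noon
print(allocate(total_activity=10_000, floor_area=floor_area,
               hourly_factor=hourly_factor))
```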
  • Aagesen, Håvard Wallin (2021)
    The Nordic region is a connected region with a long history of cooperation, shared cultures, and social and economic interactions. Cross-border cooperation and cross-border mobility have been a central aspect of the region for over half a century. Despite shared borders and all countries being part of the Schengen Area, which provides free movement, little research has been done on the extent of daily cross-border movements and little data exist on the topic. In light of the COVID-19 pandemic, human mobility and cross-border mobility have risen to the top of the political agenda, with new challenges changing cross-border mobility around the world. As an already very connected region, the Nordic region saw a sudden decrease in mobility, and areas across borders were suddenly isolated from each other. The spread of the COVID-19 virus and the most important measures to counter the pandemic have been spatial in nature. Restrictions on mobility and lockdowns of regions and countries have been some of the measures set in place to varying degrees in different locations. Understanding the effects of mobility on the spread of COVID-19 and understanding how successful different measures have been is important in handling the ongoing and future pandemics. There is a lack of, particularly quantitative, research that investigates the functional aspects of cross-border mobility in the Nordic region. In addition, up-to-date, reliable data on human flows between the Nordic countries are missing. Research on the spread and effects of the COVID-19 pandemic in relation to human mobility is rapidly increasing and being pioneered in conjunction with the developments of the pandemic. Through a lens of human mobility and activity spaces, this thesis investigates how the cross-border regions in the Nordics reveal themselves when the movements of individuals are aggregated. The aim is to examine how geotagged Twitter data can be used to study cross-border mobility, which functional cross-border areas can be estimated from the movements of Twitter users, and how these movements have been affected by the COVID-19 pandemic. Twitter data are collected and processed to reveal human mobility flows from before and after COVID-19 travel restrictions were set in place, making the data fit for a correlation analysis with available official commuter statistics. Using kernel density estimation, the functional cross-border regions are estimated at different spatial levels, uncovering the spatial extent of functional regions and how human mobility connects regions across national borders. On this basis, the movements of Twitter users in two time periods, March 2019 – February 2020 and March 2020 – February 2021, are compared with available statistics from the Nordic region. The results show that Twitter data correlate strongly with official commuter statistics for the region and are a good fit for studying cross-border mobility. Additionally, the policy-made cross-border regions do not completely overlap with the functional cross-border regions. Although there are many similarities between the policy-made and functional cross-border regions, in a functional sense the regions are smaller than the policy-made regions and heavily condensed around large cities. The estimation of functional cross-border regions also shows the effect of COVID-19 and of the measures taken to limit cross-border mobility.
The amount of cross-border mobility is severely reduced, and the composition of the functional regions changes differently for different regions. In general, the spatial extent of the cross-border regions shrinks and gravitates towards the largest cities on either side of the border. The methods and results developed in this thesis provide an understanding of the dynamics of mobility flows in the Nordic region and are a first step towards increasing the use of novel data sources in cross-border mobility research in the Nordics. Further research is needed into methods for expanding the data basis in the region and into deepening the understanding of the demographic and temporal aspects of functional cross-border regions. Regional planning, tourism, and statistics are all fields that rely on recent, up-to-date data, and the methods for utilizing novel data sources shown in this thesis can mitigate some of the flaws of current data sources. In combating the spread of the COVID-19 virus, it is of profound importance to understand mobility flows across borders, something that this thesis provides methods and insights to do.
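A minimal sketch of the kernel density estimation step with scipy, on synthetic coordinates: estimate a density surface from aggregated point locations and keep the area holding, say, the densest 75% of observations as the functional region. The two-cluster point cloud and the 75% cut-off are illustrative assumptions, not the thesis parameters.

```python
# Delimit a "functional region" from aggregated point locations with a KDE.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
# Synthetic cross-border point cloud: two clusters around cities on either side.
pts = np.vstack([rng.normal([0, 0], 1.0, (500, 2)),
                 rng.normal([5, 1], 1.5, (300, 2))]).T     # shape (2, n)

kde = gaussian_kde(pts)
density_at_points = kde(pts)
# Keep the area containing the densest 75 % of observations:
threshold = np.quantile(density_at_points, 0.25)
inside = density_at_points >= threshold
print("share of points inside the functional region:", inside.mean())
```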
  • Kari, Daniel (2020)
    Estimating the effect of random chance (’luck’) has long been a question of particular interest in various team sports. In this thesis, we aim to determine the role of luck in a single ice hockey game by building a model to predict the outcome based on the course of events in a game. The obtained prediction accuracy should also, to some extent, reveal the effect of random chance. Using the course of events from over 10,000 games, we train feedforward and convolutional neural networks to predict the outcome and the final goal differential, which has been proposed as a more informative proxy for the outcome. Interestingly, we are not able to obtain distinctly higher accuracy than previous studies, which have focused on predicting the outcome with information available before the game. The results suggest that there might exist an upper bound for prediction accuracy even if we knew ’everything’ that went on in a game. This further implies that random chance could affect the outcome of a game, although assessing this is difficult, as we do not have a good quantitative metric for luck in the case of single ice hockey game prediction.
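A minimal sketch of the prediction setup with scikit-learn, assuming the course of events is summarised as per-game event counts: a small feedforward network regresses the goal differential, and the sign of the prediction gives the predicted winner. The features, network size and data are placeholders, not the thesis models.

```python
# Predict goal differential from synthetic in-game event counts with a
# feedforward network; the sign of the prediction is the predicted winner.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
n_games, n_features = 10_000, 40          # e.g. shot, hit, penalty counts per period
X = rng.poisson(5, size=(n_games, n_features)).astype(float)
true_w = rng.normal(0, 0.1, n_features)
y = X @ true_w + rng.normal(0, 1.5, n_games)   # synthetic goal differential

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
outcome_accuracy = np.mean(np.sign(pred) == np.sign(y_te))   # predicted winner
print("R^2:", model.score(X_te, y_te), "winner accuracy:", outcome_accuracy)
```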
  • Kozar, Oxana (2015)
    The economic development of Northern Ostrobothnia has been in the spotlight because of the rapid growth of Oulu's high-technology industries. However, most researchers seem to concentrate on the development in Oulu without considering the situation in the whole region or its peripheries. The thesis gives an overview of the main factors that have affected the successful high-tech development in Oulu since the 1960s, as well as the economic and social development of the whole region of Northern Ostrobothnia. Innovations have become a very topical issue in research and policy during the last decades. For this reason the understanding of the term has become vague, and it is increasingly used as a buzzword. This thesis analyses how innovations are defined in the regional strategic programmes of Northern Ostrobothnia, what types of innovation activities are emphasized most, and which industries are considered to benefit from innovations. The usage of different territorial innovation models (innovative milieu, cluster, regional innovation system) is also analyzed. The research also examines how different geographical scales of innovation activities are seen in the studied policy documents: how the role of the global scale for innovative development in Northern Ostrobothnia is understood, whether any links with the national innovation system are discussed, and how innovations at local scales are treated. The documents chosen for the research are the regional strategic programmes for four programming periods: 2004-2006, 2007-2010, 2011-2013 and 2014-2017. The data are analyzed using qualitative content analysis, which makes it possible to systematize and summarize the content of the text and to examine in what contexts the terms are used. The analytical approach is theory-driven, but modified in view of the data. The research showed that the regional strategic programmes employ a broad definition of innovation, especially emphasizing marketing in the documents for the latest programming periods. Innovative activities are considered not only in the context of high-technology industries but also in relation to low- and medium-technology industries, though in the latest documents the focus is mainly on high-tech production. The term 'cluster' is used mainly as a buzzword, though in the latest regional strategic programme its frequency of use has decreased, while the model of the innovative milieu has become more popular. The concept of a regional innovation system is mentioned only briefly in the documents. The structures of the territorial innovation models are explained only to a very limited extent; however, the documents link the concept of the regional innovation system to the innovative milieu, seeing the former as the next stage of development of the latter. The regional strategic programmes mention the role of innovation processes happening at different geographical levels, but the discussion is rather limited: the international scale is mostly considered as the global market, the national scale is seen as a source of funding, and at the local scale attention is paid mainly to Oulu.
  • Zavodovski, Aleksandr (2016)
    The transition to digital television (DTV) has freed up large spectrum bands, known as the digital dividend. These frequencies are now available for opportunistic use and are referred to as Television White Space (TVWS). The usage of TVWS is regulated by licensing: there are primary users, mostly TV broadcasters, who have bought a license to use certain channels, and secondary users, who can use channels that the primary users are not currently utilizing. Coexistence can be facilitated either by spectrum sensing or by White Space Databases (WSDBs); in this thesis we concentrate on the latter. Technically, a WSDB is a geolocation database that stores the location and other relevant transmitter characteristics of primary users, such as antenna height and transmission power. The WSDB calculates the safety zone of a primary user by applying a radio wave propagation model to the stored information. A secondary user sends a request to the WSDB containing its location and receives a list of available channels. The main problem we concentrate on is the specific challenges that mobile devices face in using WSDBs. Current regulations demand that after each 100 meters of movement the mobile device query the WSDB, which increases the device's energy consumption and the network load. Fast-moving devices face an even more severe problem: there is always some delay in communicating with the WSDB, and it is possible that while waiting for the response the device moves another 100 meters. In that case, instead of using the reply, the device has to query the WSDB again. For fast-moving devices (e.g. inside vehicles) this vicious loop can continue indefinitely, resulting in an inability to use TVWS at all. A. Majid has proposed a predictive optimization algorithm called Nuna to deal with the problem. Our approach is different: we investigate the spatiotemporal variation of the spectrum and, based on more than six months of observations, we propose a spectrum caching technique. According to our data, there is minimal temporal variation in the TVWS spectrum, which makes caching very appealing. We also sketch the technical details of a possible spectrum caching solution.
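A sketch of the client-side caching idea under stated assumptions: the device re-queries the WSDB only when it has moved at least 100 m from the location of the cached reply or the reply has aged out. The WSDB query is stubbed out and the one-hour time-to-live is an illustrative choice, not a figure from the thesis.

```python
# Client-side spectrum caching keyed on the 100 m movement rule.
import math
import time

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in meters."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def query_wsdb(lat, lon):
    """Placeholder for the real WSDB request."""
    return ["ch21", "ch24", "ch39"]

class SpectrumCache:
    def __init__(self, max_move_m=100.0, ttl_s=3600.0):
        self.max_move_m, self.ttl_s = max_move_m, ttl_s
        self.entry = None            # (lat, lon, timestamp, channels)

    def channels(self, lat, lon):
        if self.entry is not None:
            clat, clon, ts, chans = self.entry
            if (time.time() - ts < self.ttl_s
                    and haversine_m(lat, lon, clat, clon) < self.max_move_m):
                return chans          # cache hit: no WSDB query needed
        chans = query_wsdb(lat, lon)  # cache miss: refresh from the WSDB
        self.entry = (lat, lon, time.time(), chans)
        return chans

cache = SpectrumCache()
print(cache.channels(60.1699, 24.9384))   # first call queries the WSDB
print(cache.channels(60.1702, 24.9387))   # ~40 m away: served from the cache
```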
  • Kurki, Joonas (2021)
    The goal of the thesis is to prove the Dold-Kan Correspondence, a theorem stating that the category of simplicial abelian groups sAb and the category of positively graded chain complexes Ch+ are equivalent. The thesis also goes through the concepts mentioned in the theorem, starting with categories and functors in the first section. In this section, the aim is to give enough information about category theory so that the equivalence of categories can be understood. The second section uses these category-theoretical concepts to define the simplex category, whose objects are the ordered sets n = { 0 -> 1 -> ... -> n }, where n is a natural number, and whose morphisms are the order-preserving maps between these sets. The idea is to define simplicial objects, which are contravariant functors from the simplex category to some other category. The coface and codegeneracy maps, which are special kinds of morphisms in the simplex category, are also defined here. With these, the cosimplicial (and later the simplicial) identities are stated. These identities are central to the calculations done later in the thesis. In fact, one can think of them as the basic tools for working with simplicial objects. In the third section, the thesis introduces chain complexes and chain maps, which together form the category of chain complexes. This lays the foundation for the fourth section, where the goal is to form three different chain complexes out of any given simplicial abelian group A. These chain complexes are the Moore complex A*, the chain complex generated by degeneracies DA* and the normalized chain complex NA*. The latter two are both subcomplexes of the Moore complex. In fact, it is shown later that there is an isomorphism A_n ≅ NA_n ⊕ DA_n between the abelian groups forming these chain complexes. This connection between the chain complexes is an important one, and it is proved and used later in the seventh section. At this point in the thesis, all the knowledge needed for understanding the Dold-Kan Correspondence has been presented. Then begins the construction of the functors needed for the equivalence that the theorem claims to exist. The functor from sAb to Ch+ maps a simplicial abelian group A to its normalized chain complex NA*, the definition of which was given earlier. This direction does not require much additional work, since most of it was done in the sections dealing with chain complexes. However, defining the functor in the opposite direction requires more thought. The idea is to map a chain complex K* to a simplicial abelian group, which is formed using direct sums and factorization. Forming it also requires the definition of another functor, from the subcategory of the simplex category whose objects are those of the simplex category but whose morphisms are only the injections, to the category of abelian groups Ab. After these functors have been defined, the rest of the thesis is about showing that they truly form an equivalence between the categories sAb and Ch+.
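For reference, the simplicial identities mentioned above and one common convention for the normalized chain complex can be written as follows (the thesis may use a different but equivalent indexing):

```latex
% Simplicial identities for face maps d_i and degeneracy maps s_i:
\begin{aligned}
d_i d_j &= d_{j-1} d_i, & i &< j, \\
s_i s_j &= s_{j+1} s_i, & i &\le j, \\
d_i s_j &= s_{j-1} d_i, & i &< j, \\
d_j s_j &= \mathrm{id} = d_{j+1} s_j, & & \\
d_i s_j &= s_j d_{i-1}, & i &> j+1.
\end{aligned}

% Normalized chain complex of a simplicial abelian group A (one convention):
NA_n = \bigcap_{i=0}^{n-1} \ker\bigl(d_i \colon A_n \to A_{n-1}\bigr),
\qquad \partial_n = (-1)^n d_n .
```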
  • Gierens, Rosa (2015)
    A semi-automatic method for detecting the top of the mixed layer in the daytime and the stable and residual layers at night is presented. Automatic algorithms for detecting gradients in the ceilometer data are utilized, in combination with a stability criterion provided by an eddy covariance system, as well as manual layer detection and quality control. The observations were carried out at Welgegund, a regional background site on the South African savannah. One year of observations was analysed, and the method is shown to work well considering existing knowledge of continental boundary layer structure and previous observations in southern Africa. Despite having some limitations, the method provided notably high data coverage. The frequency at which each layer was detected showed an annual cycle, lowest in summer and highest in winter for all three layers studied, combined with a diurnal cycle in which daytime provided lower coverage. A clear diurnal cycle in boundary layer evolution was observed; however, the average heights of the tops of the different layer types showed modest or non-existent annual variation. The day-to-day variation was pronounced. The strongest seasonal characteristic was present in the summer, when occasional deep convective layers were observed, increasing the variability of the mixed layer top compared with other seasons. The effects of conditional sampling were tested by separating the observations into five data sets based on weather conditions and the applicability of the method, and various factors with the potential to bias the results are discussed. The result underlines the need for representative observations of all conditions that are to be included in the study. Some examples of the implications of boundary layer structure for particle concentrations are considered in explaining phenomena observed in particle number distribution measurements.
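The gradient detection at the core of the method can be illustrated with a short sketch: take an attenuated-backscatter profile and pick the height of the strongest negative vertical gradient as a candidate layer top. The synthetic profile, smoothing window and range-gate spacing are assumptions; the thesis combines this kind of step with an eddy covariance stability criterion and manual quality control.

```python
# Pick the height of the strongest negative gradient in a backscatter profile.
import numpy as np

def gradient_layer_top(height_m, backscatter):
    """Return the height of the strongest negative gradient in the profile."""
    smoothed = np.convolve(backscatter, np.ones(5) / 5, mode="same")  # light smoothing
    gradient = np.gradient(smoothed, height_m)
    return height_m[np.argmin(gradient)]

height = np.arange(0, 3000, 10.0)                       # 10 m range gates (assumed)
profile = 1.0 / (1.0 + np.exp((height - 1200) / 60.0))  # sharp drop near 1200 m
profile += np.random.default_rng(6).normal(0, 0.01, height.size)
print("detected layer top (m):", gradient_layer_top(height, profile))
```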
  • Lindström, Mats Johan Wilhelm (2020)
    Within the last century humanity has grown significantly more numerous and more globally connected than ever before in its history. Together with the increased risks of climate change, we are more susceptible than ever to major epidemics and pandemics caused by novel zoonotic diseases. For these reasons it is important not only to understand under which conditions novel pathogens are able to invade and spread in a host population, but also to understand how these pathogens can be eradicated following an invasion event. In this thesis we present and study the demographic and evolutionary dynamics of a compartmental epidemiological model that includes a compartment for asymptomatic individuals, who require a second infection to become symptomatic and infectious. We show that the model exhibits a wide variety of demographic dynamical behaviours, all of which can be evolutionarily attracting configurations under simple evolutionary considerations. The model is an extreme simplification of the real world and excludes relevant information such as the age and spatial structure of the population at hand. The aim of this thesis is to obtain a general understanding of how varying certain parameters on the one hand allows a pathogen to invade a host population and on the other hand allows the host to eradicate an established pathogen, in particular through the process of evolution.
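The abstract does not give the model equations, so the sketch below is only my illustrative guess at the described structure: susceptibles become asymptomatic on a first infection, and asymptomatic individuals become symptomatic and infectious only on a second infection. All equations, rates and parameter values are invented; treat this as a reading aid, not the thesis model.

```python
# Hedged sketch of a susceptible (S) - asymptomatic (A) - infectious (I)
# structure in which A requires a second infection to become I.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, b, mu, beta, alpha):
    S, A, I = y
    dS = b - beta * S * I - mu * S                  # births, first infections, deaths
    dA = beta * S * I - beta * A * I - mu * A       # A created by first, removed by second infection
    dI = beta * A * I - (mu + alpha) * I            # infectious class with extra mortality
    return [dS, dA, dI]

params = dict(b=1.0, mu=0.02, beta=0.5, alpha=0.1)  # illustrative values only
sol = solve_ivp(rhs, (0, 400), [50.0, 0.0, 0.1],
                args=tuple(params.values()), dense_output=True)
print("final state (S, A, I):", sol.y[:, -1])
```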