
Browsing by Title


  • Soini, Suvi (2020)
    In my Master's thesis I examine the Pareto distribution, one of the continuous distributions. The Pareto distribution is a heavy-tailed probability distribution named after its originator, the Italian economist Vilfredo Pareto. I wanted to study a continuous distribution that appears less often in probability textbooks. In the thesis I derive the key characteristics of the Pareto distribution, present applications from its early history to the present day, and consider its potential uses in the modules of upper secondary school teaching. Deriving the Pareto distribution and its characteristics requires several concepts, definitions and theorems from the general theory of continuous distributions. The most important probability concepts appearing in the thesis are the random variable, the cumulative distribution function, the density function, probability, expected value and variance. The thesis begins with an introductory chapter that briefly describes its contents. The introduction to the actual topic continues in the next two chapters: the second chapter briefly presents the personal history of Vilfredo Pareto, and the third chapter covers the basics of continuous distributions, presenting the definitions and theorems needed to study the Pareto distribution. The fourth chapter acquaints the reader with the history of the Pareto distribution. The fifth chapter presents the definition of the Pareto distribution, its characteristics together with the necessary theorems and their proofs, and also covers the different types of Pareto distribution. The sixth chapter first offers an overview of possible applications of the Pareto distribution in various fields of science. The Pareto distribution is still centrally used as an income distribution, its original purpose; I address this by describing the modelling of the wealth of Finnish households with the Pareto distribution. At the end of the sixth chapter I highlight the possibilities of using the Pareto distribution to model the transmission of superspreading pathogens such as the coronavirus. In the final, seventh chapter I consider the use of the Pareto distribution in the various modules of upper secondary school teaching according to the new curriculum introduced in August 2021, and I have developed example exercises for teaching. The Pareto distribution is a highly useful and topical distribution that works in many situations across several fields of science, especially economics. Its applications lend themselves to good exercises for upper secondary school mathematics that cross subject boundaries and meet the goals of transversal learning. Many more applications could still be studied; this Master's thesis is only a brief overview of the Pareto distribution.
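For reference, the central quantities such a derivation covers can be stated compactly for the (Type I) Pareto distribution; the notation below, with scale x_m > 0 and shape α > 0, is assumed here rather than taken from the thesis:

```latex
f(x) = \frac{\alpha x_m^{\alpha}}{x^{\alpha+1}}, \qquad
F(x) = 1 - \left(\frac{x_m}{x}\right)^{\alpha}, \qquad x \ge x_m,
```

```latex
\mathbb{E}[X] = \frac{\alpha x_m}{\alpha - 1} \quad (\alpha > 1), \qquad
\operatorname{Var}(X) = \frac{\alpha x_m^{2}}{(\alpha - 1)^{2}(\alpha - 2)} \quad (\alpha > 2).
```

The heavy tail is visible in the moments: for α ≤ 1 the mean diverges, and for α ≤ 2 the variance diverges, which is what makes the distribution suitable for income and wealth data.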
  • Pesonen, Saara (2019)
    The Pareto distribution is a continuous heavy-tailed probability distribution that originated from the observation of the Italian economist Vilfredo Pareto that 20% of the Italian population owned 80% of all wealth. The generalization of this observation is known today as the Pareto principle, which is a special case of the Pareto distribution. This thesis presents the key properties and applications of the Pareto distribution and discusses its possibilities in upper secondary school teaching. The thesis begins with the personal history of Vilfredo Pareto and the theory of continuous and heavy-tailed distributions needed later in the thesis. The fourth chapter covers the key properties and history of the distribution as well as the Pareto principle. The chapter also presents the generalized Pareto distribution and its different types, methods for estimating the parameters of the Pareto distribution, and connections to other distributions. The chapter on the key properties of the Pareto distribution is mainly based on Norman L. Johnson and Samuel Kotz's book Continuous Univariate Distributions [1-2]: Distributions in Statistics. The fifth chapter presents applications of the Pareto distribution in various fields. The main emphasis is on the study of income distributions, Pareto's central field of research. Examples of modelling the distribution of income and wealth with the Pareto distribution come from Japan, Norway and Romania. Other application examples include studies on estimating the areas and numbers of lakes and on parameter estimation in the insurance industry. The sixth chapter discusses the use of the Pareto distribution in upper secondary school teaching on the basis of the national upper secondary curriculum published in November 2019. The possibilities of the Pareto distribution in teaching are examined from the perspectives of statistics, transversal competence and cross-curricular learning.
  • Kekäläinen, Jukka (2015)
    This thesis surveys, through a literature review, the best practices of global software development, in which software developers work in different countries or continents. Global software development projects are more complex than locally executed ones. In a few decades the field has shifted from software production concentrated in the United States to the current situation, where a large share of software development projects is carried out globally, exploiting in particular the low wage levels of Asian countries. The key assumed benefit of global software development is cost savings arising from wage differences between countries. These assumed savings involve several risks whose realization is difficult to assess in advance. Different time zones, languages and geographical locations pose many challenges to the economic, technical, managerial and cultural aspects of global software development. Problems can be hard to anticipate, and when they arise in the middle of a project their knock-on effects can cripple the work. The majority of research in the field is characterized as problem reports. Best practices are pieces of advice that aim to minimize the problems of global software development. They particularly emphasize the importance of communication and of enabling it between workers. The global nature of a project should be taken into account already when designing the organizational structure and the software architecture. Investing in best practices requires know-how and resources that are not necessarily available. Global software development is a widespread operating model that may yield cost savings and more efficient work. The field has been studied, problems identified and best practices proposed, but practical projects have nevertheless proved problematic, and no standards for carrying out a successful global software project have yet been developed.
  • Vesanen, Sampo (2020)
    Accessibility – what can be reached from a given point in space and how – is an essential field of study for measuring the physical structure of cities, the travel mode choices of residents, and the competitiveness of areas. Researchers increasingly acknowledge that accessibility is a fundamental concept in understanding how urban regions work, and its position in the future development of cities is paramount. Travel time is considered an intuitive measure of accessibility and a strong predictor of mode choice, and usually the private car is the fastest mode of transport in urban environments. A central issue stemming from private cars and accessibility is the process of searching for parking. An understudied issue, this rather stressful activity begins when a motorist arrives by car at the general area of the desired parking but no space is available. Motorists are then forced to continue the search for parking, significantly contributing to urban congestion. In catering to mobility rather than accessibility, modern urban planning has made it challenging to move away from private cars toward alternative, often more sustainable, modes of transport. Travel time studies, and more specifically parking studies, can produce accurate data to aid in this transformation. In this thesis, a parking-related research survey was developed and conducted in the Helsinki Capital Region, Finland. Adhering to the door-to-door approach, the survey respondents were asked how long it took them to find a parking place and park their car, and to walk from the car to the destination, in different postal code areas of the Helsinki Capital Region. To explain hypothesized variation in parking process durations (searching for parking, and walking to one's destination) in different areas, additional questions, such as the time of day of parking, were presented. The invitation to respond to the survey was mostly spread on the social media platform Facebook. The survey, filled out with a web application programmed specifically for this thesis, received 5200 data rows from over 1000 unique visitors. The survey results indicate that there are spatial differences in parking process durations in different postal code areas of the Helsinki Capital Region. The inner city of Helsinki was experienced as the most difficult location to park in, with regional subcenters such as Matinkylä, Espoo and Tikkurila, Vantaa, receiving relatively long parking process durations. Short parking process durations were reported from sparsely built areas, but more often than not these areas also had extreme values reported. Interestingly, area familiarity did not necessarily translate into a faster parking process, while the type of the usual parking place was a better indicator. Of the spatial explanatory variables added in the survey data processing, zones of urban structure (yhdyskuntarakenteen vyöhykkeet) could be used to find statistically significant differences in the parking process between variable groups and study area municipalities. Making use of the Helsinki Region Travel Time Matrix, a dataset developed by the Digital Geography Lab research group of the University of Helsinki, the thesis survey data was compared to total travel chain durations. The data indicate that the time it takes to park one's car and walk to one's destination is a much larger proportion of the entire travel chain than previously estimated in the dataset. The parking process times are proportionally largest in the inner city of Helsinki, where the reported parking process duration exceeds that of the actual driving segment. This thesis, its entire version history, and all of the scripts developed for it have been made available at GitHub: https://github.com/sampoves/thesis-data-analysis.
  • Rekola, Iiris (2022)
    This thesis examines the Particle-into-liquid sampler (PILS), a collection device for water-soluble aerosol components. The literature review is divided into four sections. A components section describes the working mechanism of the PILS and the components used in a typical PILS setup. A performance section discusses the collection efficiency, time response and resolution, background, and various other metrics of the PILS. A section on analysis methods reports on the various analytical methods used in combination with the PILS, while a research applications section looks at the various ways the PILS has been used in aerosol research. The experimental part focuses on untargeted analysis of the water-soluble aerosol content of indoor and outdoor air, using the PILS for sample collection and off-line gas chromatography-mass spectrometry (GC-MS) and two-dimensional gas chromatography-time-of-flight mass spectrometry (GCxGC-TOFMS) for analysis. To increase the collection efficiency, various sampling parameters were optimized, but without major success. Tentative identification of the detected compounds revealed mostly small organic compounds: oxygen compounds, benzenoids, organic acids, hydrocarbons, lipids and lipid-like molecules, and organoheterocyclic compounds.
  • Grönqvist, Hanna (2012)
    In this thesis we consider extending the standard model of particle physics (SM) to include a fourth generation of elementary particles (SM4). The fourth generation would have to be sufficiently heavy to have escaped detection; specifically, its neutrino is required to be kinematically inaccessible to the Z boson in order to agree with the very precise LEP measurements of the Z width. This extension is appealing since the current theory (the SM) exhibits tension with some phenomena observed in nature, such as the replication and number of fermion families, the ratio of matter to antimatter in the observable universe, charge-parity violation and the mixings between the fermions. Until very recently the issue of the origin of mass, that is, the mechanism of electroweak symmetry breaking in the SM, lacked experimental verification. However, during the writing of this thesis there have been some very exciting advances in this domain. In July 2012 the CMS and ATLAS collaborations at the LHC reported the observation of a resonance at ∼ 125 GeV, an observation confirmed by experiments at another high-energy particle collider, the Tevatron. This resonance seems to correspond to the Higgs particle, the quantum of the scalar field responsible for the breaking of the electroweak symmetry. Besides verifying the answer to the theoretically fundamental question about the origin of mass, the experimental discovery also serves as a constraint on any theory of particle physics. The goodness of models describing particle physics is generally tested by performing global fits to the data, with the data set usually taken to be the most precisely measured quantities available — the electroweak precision observables. When the recent Higgs signal strengths are included in the data set, it is seen that the SM4 is not a correct theory of nature.
Specifically, the Higgs signals predicted by the SM4 are not in agreement with the data, and in September 2012 the model was quite decisively excluded at a statistical significance of 5.3σ. Following the developments in the field, we next consider the phenomenological effects of adding another scalar doublet to the previously considered SM4. In the SM and SM4 there is just one scalar (Higgs) doublet: the models have a minimal scalar sector. The fermion sector, however, is not minimal: there are at least three replicas of a fermion family, and so it is possible that the scalar sector is not minimal either. There are in fact arguments in favor of several Higgs doublets, for example supersymmetry and the baryon asymmetry of the universe. Two doublets give rise to five physical particles, and so the phenomenology of such models is much richer than in the minimal scenario. Four-family models have received a great deal of interest in the last decade: some 500 articles are reported to have been published concerning their phenomenology during this time. This thesis is a review of the recent developments in this field.
  • Sulo, Juha (2018)
    The activation probability of particles inside a PSM depends on the supersaturation level of the growth fluid used in the instrument: a higher supersaturation level activates more and smaller particles. In addition, meteorological quantities can affect the operation of the PSM. The purpose of this study was to investigate the sensitivity of the Particle Size Magnifier (PSM) to its settings and to meteorological quantities, and to look for factors contributing to new particle formation. For measurement accuracy it is important to understand with which PSM settings atmospheric particles in the 1-3 nanometre size range can be measured as accurately as possible. The study was carried out by comparing the PSM signal and background to various meteorological quantities, particle measurements and vapour measurements performed with a CI-APi-ToF. The measurements were made in 2014-2016 at the SMEAR II station in Hyytiälä. The PSM settings were adjusted annually, so the aim was to determine the optimal settings in terms of signal-to-noise ratio and signal strength. The supersaturation and signal amplitude of the PSM were observed to decrease year by year, and correspondingly the signal-to-noise ratio improved as the supersaturation level decreased. In 2014 the PSM detected a clear diurnal cycle in both sub-2 nm and 2-3.5 nm particles, and detecting the diurnal cycle became more difficult in the following years. The year 2014 also showed the strongest correlations with the vapours measured by the CI-APi-ToF. Of the chemical compounds, sulfuric acid correlated best with the PSM signal, and the correlation with all vapours weakened as the supersaturation weakened. The particle concentrations measured by the PSM also correlated clearly with temperature and water vapour concentration, but this may be due to the strong temperature dependence of organic vapours. The results show that the PSM background can be kept at a level of 20-30 particles per cubic centimetre without significantly compromising the signal-to-noise ratio or detection sensitivity.
  • Kirjasuo, Anu (2021)
    Despite a vast body of knowledge already accumulated on particle transport at both the theoretical and experimental level, a simple method for estimating the impact of the particle source on plasma density profile peaking has been lacking. Fable et al. presented a parameter for calculating the source strength (Sstr, the S parameter) in [1]. The parameter is derived from the particle flux continuity equation and, after approximations, takes as input only the neutral beam injection (NBI) power, the beam ion injection energy, the effective core heat transport diffusivity, and the plasma density, radius and volume, together with a coefficient fitted to an ASDEX Upgrade experiment. The formula was applied to a database of 165 pulses in both high and low confinement mode, mostly with neutral beam heating, in the JET (Joint European Torus) fusion experiment. The results appear reasonable considering the fitted parameter and the approximations in the formula. In addition to the S parameter values, the dependence of the normalised density gradient on neutral beam heating power and collisionality was also investigated, to compare the results with those obtained at ASDEX Upgrade in [1]. Detailed studies of six gas puff modulation shots [2, 3, 4] at JET are used as reference. In [2] the source contribution was 50-60% for the H-mode shots and 10-20% for the low confinement mode shots. This is further validated in [3], and the high confinement mode shots are compared to similar shots at the DIII-D fusion experiment in [4], where the source impact on density peaking was negligible. The observed differences are attributed to different dominant turbulent environments. The average level of the calculated S parameter values suggests a mostly non-negligible source contribution to density peaking, and the values differ for high and low confinement mode plasmas, in line with [2, 3, 4]. However, the results imply that the coefficient 2000 is not constant across the database, and while a scalar correction to fit the coefficient to JET may be possible for low confinement mode plasmas, the high confinement mode plasmas require further research.
  • Penttilä, Antti (University of Helsinki, 2002)
    The aim of the thesis is to construct a geometric shape model for boron carbide (B4C) particles, to estimate the model parameters from images of the particles, and to compare the linear polarization produced by the model with the polarization of B4C particles measured in microgravity. B4C is one of the particle types studied in microgravity by the French PROGRA2 research group. The group has equipment suitable for polarization measurements aboard an aircraft used for parabolic flights. During a parabolic flight, nearly weightless conditions are created inside the aircraft, during which the polarization measurements are made. Gravity affects the orientation and packing of the particles, and thereby also the polarization. In astronomy, microgravity targets include, for example, interstellar dust and the tails of comets. The shape of small particles can be modelled with regular shapes such as ellipsoids or cylinders, or with randomly deformed spheres such as Gaussian spheres. A random polyhedron, however, is a better shape model for B4C particles. The thesis presents a suitable procedure for generating random polyhedra. The model has two parameters, which are estimated from image material of the particles. The images show random 2D projections of the particles. Each particle has been imaged from only one direction, so the three-dimensional shape of the particles cannot be derived directly from the images. If the particles are assumed to follow the same shape model, however, the three-dimensional shape can be estimated in a statistical sense. Random projections can also be taken of realizations of the model, and the same quantities measured as from the real particles. These quantities are random variables, but deriving their analytical distribution is a very difficult task, so the maximum likelihood method cannot be used to estimate the model. With the model procedure, however, observations can be simulated from this unknown distribution. A kernel estimate formed from these observations estimates the unknown distribution at a given value of the parameter vector. In the simulated maximum likelihood method, likelihood inference is based on these estimates. In this way the thesis estimates the parameter values of the shape model for B4C particles. Using a ray-tracing code, the linear polarization produced by the particles of the random polyhedron model is computed. Besides the shape and size of the particles, the polarization is also affected by their complex refractive index, but the refractive index of B4C particles is not yet known. The thesis forms an estimate for this refractive index by comparing the differences between the polarization curves of the model and the real particles as a function of the real and imaginary parts of the refractive index in the least-squares sense. In light-scattering research, one often wants to infer the properties of the scattering material from its light scattering. When these properties depend on the shape and size of the object and the refractive index of the material, it is very important for the success of the inversion that the shape model is realistic and well estimated. The simulated likelihood method presented in the thesis can be used to estimate various shape models, and it can also be applied to other estimation problems regardless of the field of application.
  • Siiskonen, Ville (2016)
    In a chaotic system, such as the atmosphere, predictability is limited. A single numerical weather forecast will always eventually diverge from the true state of the atmosphere. To improve numerical weather forecasts, ensemble forecasts have been developed, in which the behaviour of several forecasts is examined instead of a single one. To create a reliable ensemble forecast, the initial states of the ensemble members must be chosen carefully. An ensemble can be created by adding either random perturbations or perturbations bred from the previous ensemble forecast to the estimated initial state. This Master's thesis examined whether the random-perturbation and bred-perturbation methods differ in reliability in the Lorenz-96 model. Reliability was compared with three verification methods: the rank histogram, the spread-skill score and the ranked probability score. Ensemble forecasts were created for both a perfect and a stochastic model. The size of the perturbations was varied with four different scaling parameters. Before creating the actual ensemble forecasts, the chaotic nature of the model was verified by determining the first Lyapunov exponent and by checking whether random perturbations turn toward the direction of the Lyapunov vector. According to the results, the perturbation methods differed in places, depending on the model and scaling parameter used. In the perfect model, the bred-perturbation method was more reliable than the random-perturbation method in terms of the rank histogram and the spread-skill score, depending on the forecast lead time and the perturbation size. In the stochastic model, the random-perturbation method was more reliable only on the basis of the rank histogram results. The ranked probability score gave no clear indication either way for either model.
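As a minimal illustration of the random-perturbation approach compared above (a sketch, not the thesis code; all function names, the time step and the perturbation scale are assumptions), the Lorenz-96 model can be integrated with a fourth-order Runge-Kutta scheme and an ensemble launched from randomly perturbed copies of an estimated initial state:

```python
import numpy as np

def lorenz96_tendency(x, F=8.0):
    """Lorenz-96 tendency: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt=0.05, F=8.0):
    """Advance the state by one fourth-order Runge-Kutta step."""
    k1 = lorenz96_tendency(x, F)
    k2 = lorenz96_tendency(x + 0.5 * dt * k1, F)
    k3 = lorenz96_tendency(x + 0.5 * dt * k2, F)
    k4 = lorenz96_tendency(x + dt * k3, F)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def random_perturbation_ensemble(x0, n_members=20, scale=0.1, n_steps=40, seed=0):
    """Ensemble forecast started from randomly perturbed initial states."""
    rng = np.random.default_rng(seed)
    members = x0 + scale * rng.standard_normal((n_members, x0.size))
    for _ in range(n_steps):
        members = np.array([rk4_step(m) for m in members])
    return members

# Spin up an initial state onto the attractor, then launch the ensemble.
x = 8.0 * np.ones(40)
x[0] += 0.01                      # break the symmetry of the rest state
for _ in range(500):              # spin-up integration
    x = rk4_step(x)
ens = random_perturbation_ensemble(x)
spread = ens.std(axis=0).mean()   # ensemble spread at this lead time
```

Comparing the growth of this spread with the forecast error of a control run is the starting point of spread-skill verification; the breeding method would instead rescale the perturbations that grew during the previous forecast cycle rather than draw fresh random ones.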
  • Grönberg, Iiro (2014)
    To prevent the ongoing climate change we need to shift our energy use from fossil fuels to renewable energy. The fast evolution of energy technology has opened up the possibility of making small-scale renewable energy investments also in private households. The concept of energy citizenship is strongly associated with energy microgeneration: it describes how sustainable energy consumption and increased awareness can combine with decentralized energy systems. Micro-level energy investments can make people more attached to renewable energy both economically and psychologically. Smart metering helps to keep track of the household's energy production and consumption, and understanding of energy grows when contact with it becomes habitual. This might have positive socio-psychological effects on people, leading to changes also in patterns of consumption. At the beginning of 2013 a community energy project started in South Karelia: altogether 21 households ordered solar photovoltaic panels from Germany. The project was non-commercial and independent. The aim of this study is to find out whether the energy investment has launched any changes related to energy citizenship. The study also analyses the community energy project as a process and the actions of a local energy company as part of the project. National energy policy is also briefly discussed. The method of this study is the semi-structured interview, which was used to map out the views of the project coordinator, the energy company and the micro-producers. The material of twelve interviews is analyzed primarily with qualitative content analysis. The material gives clear signals that the solar energy producers have been affected by sparks of energy citizenship. Many of the micro-producers actively monitor their households' energy production and consumption, which has led to an upgraded level of knowledge. People are also scheduling some of their energy consumption according to when the solar panels produce energy. The most important result of this study is that investing in energy production can potentially lead to further positive changes in households' energy use. Based on the results, community-led decentralized renewable energy projects can have a positive impact on both climate change and energy attitudes. Making one's own investment and building solar panels with one's own hands makes people more attached to energy, which is fertile soil for sprouts of energy citizenship. Solar panels are not the final answer to climate change, but decentralized energy production can be part of the solution – especially because it also affects producer-consumers. The dichotomy between centralized and decentralized energy production is unhelpful because both are needed. Instead of promoting only certain forms and scales of renewable energy, we could build supportive conditions for renewable energy in general – solar energy included.
  • Uotila, Touko (2024)
    Optical frequency combs are broadband laser light sources that produce light consisting of equally spaced narrow lines. The frequencies of these comb lines can be determined and stabilized accurately. A dual-comb spectrometer is based on two optical frequency combs, and it can measure spectra at high speed and with high spectral resolution and accuracy. This requires high mutual coherence between the two combs. In this thesis, a passively coherent dual-comb spectrometer was optimized for spectroscopic temperature measurements. The spectroscopic temperature measurements presented in this thesis are based on quantifying the temperature dependence of molecular absorption lines. In the experimental part, three different spectroscopic temperature measurement techniques were used to measure the temperature of acetylene gas, and the results were compared to the reading of a reference temperature sensor. The three methods demonstrated in this thesis are line-strength ratio thermometry (LRT), rotational-states distribution thermometry (RDT), and Doppler-broadening thermometry. The measured dual-comb spectra were of high quality, and the dual-comb figure of merit was determined to be 5.7×10^6 Hz^(-1/2), a typical value for a high-quality dual-comb spectrometer. All the temperature measurements were performed at room temperature (295 K). Line-strength ratio thermometry produced the most accurate temperature results, with an estimated uncertainty of approximately 1 K. Rotational-states distribution thermometry results had an estimated uncertainty of about 3 K. Doppler-broadening thermometry did not produce reliable results, most likely due to an excessively high gas pressure. Future work should cover larger temperature and pressure ranges to assess the accuracy of the presented spectroscopic thermometry techniques more thoroughly.
  • Pruccoli, Andrea (2018)
    Dumping sites of chemical warfare agents and related compounds were created after the last two major wars. Assessing the risks connected to these sites is a priority, as they could threaten human health both directly, through incidents in which the dangerous materials come into contact with a person, and indirectly, as the poisonous substances can affect the environment and enter the food chain. Many techniques have been used in the monitoring, for example sediment analysis and mussel bio-monitoring. To overcome the deficiencies of these techniques and to obtain a more complete overview of the situation, new methods of analysis are being studied. In this thesis the passive sampler technique was studied as a new method to monitor chemical warfare agent dumping sites. This technique has often been used in the environmental monitoring of air and water samples. Specifically, this work focused on the use of silicone sheets as passive samplers, investigating their effectiveness with the substances of interest: sulfur mustard derivatives, arsine-related chemical warfare agent derivatives, and α-chloroacetophenone. Furthermore, the extraction power of different solvents was tested, and a theoretical study of the opposing phenomena that compete in the extraction process was carried out. Finally, the theoretical uptake model was tested on the different substances, verifying its validity and showing how the efficacy of the passive sampling technique depends on various factors such as the sampler-water partition coefficient, the relative recovery from the sampler, and the stability of the compound of interest. The recovery studies showed that acetone is the best solvent for a wide variety of compounds, but its extraction power towards less polar compounds can be improved by using a 9:1 acetone/ethyl acetate solution. The effectiveness of silicone sheets as passive samplers was demonstrated by the kinetic studies. Stable compounds with a high octanol-water partition coefficient (≥ 3) show the best results, in good agreement with the theoretical model. The next step will be testing the silicone sheets near known dumpsites, using performance reference compounds for in situ calibration.
  • Juvonen, Markus (2017)
    This thesis aims to familiarize the reader with the ideas behind the success of patch-based image representations in image processing applications in recent years. Furthermore, we show how to restore images using patch-based dictionary learning and the k-means clustering algorithm. In chapter 1 we introduce the notion of patch-based image processing and look at why dictionary learning using sparsity is a hot topic and useful in processing natural images. The second chapter formulates the different methods and approaches used in this thesis mathematically; dictionary learning, the k-means algorithm and the structural similarity index (SSIM) are the main focus. Chapter 3 goes into the details of the experiments, where we also present and discuss the results. The fourth and final chapter summarizes the main ideas of the thesis and suggests directions for further investigation based on the methods used. Using a fairly simple patch-based image processing method, we manage to reconstruct images from a set of similar images to a reasonable extent. As the main result we see how the size of the patches as well as the size of the learned dictionary affects the quality of the restored image. We also identify the limitations and problems of this approach, such as the appearance of patch artifacts, an issue to attack and resolve in future studies.
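The restoration idea summarized above can be sketched in plain NumPy (a simplified illustration, not the thesis implementation; all names, the non-overlapping-patch layout and the parameter values are assumptions): cut an image into patches, learn a small dictionary of representative patches with k-means, and replace each patch by its nearest dictionary atom:

```python
import numpy as np

def extract_patches(img, p):
    """Cut the image into non-overlapping p x p patches, flattened to rows."""
    h, w = img.shape
    h, w = h - h % p, w - w % p          # trim so patches tile exactly
    patches = (img[:h, :w]
               .reshape(h // p, p, w // p, p)
               .swapaxes(1, 2)
               .reshape(-1, p * p))
    return patches, (h, w)

def kmeans(data, k, n_iter=20, seed=0):
    """Plain k-means; the centroids act as the learned patch dictionary."""
    rng = np.random.default_rng(seed)
    centroids = data[rng.choice(len(data), k, replace=False)].astype(float)
    for _ in range(n_iter):
        # Squared distances from every patch to every centroid.
        d = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = data[labels == j].mean(axis=0)
    return centroids

def reconstruct(img, centroids, p):
    """Replace every patch by its nearest dictionary atom."""
    patches, (h, w) = extract_patches(img, p)
    d = ((patches[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    coded = centroids[d.argmin(axis=1)]
    return (coded.reshape(h // p, w // p, p, p)
                 .swapaxes(1, 2)
                 .reshape(h, w))
```

In this sketch the patch size p and the dictionary size k play exactly the roles discussed in the abstract: small dictionaries and large patches give blocky patch artifacts, while larger dictionaries track the image more faithfully at higher cost.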
  • Jalkanen, Pinja-Liina Jannika (2020)
    Large-scale transport infrastructure projects change our daily mobility patterns, as they change the geographical accessibility of the places where we spend most of our time, such as our homes and workplaces. Thus, there is a clear need for advance evaluation of the effects of those projects. Traditionally, however, the available methods have imposed severe limitations on both measuring accessibility and surveying mobility, and despite the modern data collection methods enabled by ever-present mobile phones, surveying mobility remains challenging due to data accessibility restrictions. Furthermore, a mobility survey would not enable any advance evaluation of mobility changes. Using a modern accessibility dataset instead of a mobility one, however, offers a possible answer. In my study, I set out to investigate this possibility. I combined a modern, multimodal and longitudinal accessibility dataset, the Helsinki Region Travel Time Matrix (TTM), with a spatially compatible, census-based longitudinal commuting dataset to evaluate the aggregated journey times in the Helsinki Capital Region (HCR), the area covered by the TTM, and estimated the shares of different transport modes based on a previously published travel survey. Armed with this combined dataset, I assessed the changes in aggregated journey times between the three years included in the TTM dataset – 2013, 2015 and 2018 – by statistical district to estimate its usability for this kind of advance mobility evaluation. As a small subset of the commuting dataset was classified by industry, I also assessed regional differences between industries. My results demonstrate that for travel by public transport, the effects of new transport projects are plausibly identifiable in these aggregated patterns, with a number of areas served by several new, large-scale public transport infrastructure projects – the Ring Rail, the trunk bus lane 560 and the western extension of the metro line – being outliers in the results.
For travel by private car and for the industry-level changes, the results are more inconclusive, possibly due to the absence of massive projects affecting the road network within the dataset timeframe, potential inaccuracies in the source data and limitations of the industry-classified part of the dataset. In conclusion, a modern accessibility dataset such as the TTM can plausibly be used to estimate the mobility effects of large-scale public transport infrastructure projects, although the final accuracy of the results is likely to be heavily dependent on the precision of the original datasets, which should be taken into account when such assessments are made. Further research is clearly needed to assess the effects of diurnal variations in travel times and the effects of more precise transport mode preference data.
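The aggregation step the abstract describes — combining per-mode travel times with commuter counts and survey-based mode shares — amounts to a weighted mean. The sketch below illustrates the idea with entirely hypothetical numbers (one district, a 55 % public-transport share); none of the values come from the TTM or the thesis:

```python
import numpy as np

# Hypothetical per-district median journey times (minutes) across the
# three TTM years, for two modes, plus commuter counts and an assumed
# survey-based mode split.
years     = [2013, 2015, 2018]
pt_times  = np.array([[34.0, 31.0, 27.0]])   # public transport, one district
car_times = np.array([[22.0, 22.5, 23.0]])   # private car, same district
commuters = np.array([1200])
pt_share  = 0.55                              # assumed mode share

# Mode-share-weighted mean per district, then a commuter-weighted mean
# across districts (trivial here with a single district).
district_mean = pt_share * pt_times + (1 - pt_share) * car_times
aggregate = (district_mean * commuters[:, None]).sum(0) / commuters.sum()
```

With more districts in the arrays, the same two lines yield the region-wide aggregated journey time per year, whose change between 2013, 2015 and 2018 is what the outlier analysis is based on.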
  • Nyman, Valtteri (2022)
    This thesis briefly introduces the PCP theorem, after which the tools needed to prove the theorem are presented piece by piece; the theorem itself is proved at the end of the thesis. The complexity class PCP[O(log n), O(1)] contains those problems for which there exists a proof such that a probabilistic Turing machine, reading a constant number of bits of the proof, can decide the problem while using only a logarithmic amount of randomness relative to the input size. The PCP theorem states that the complexity class NP is contained in the complexity class PCP[O(log n), O(1)]. A coloring is a function that assigns some symbol to each variable in a set. A constraint on some variables is a list of the symbols that the constraint allows to be assigned to those variables. If a coloring assigns only symbols allowed by the constraint, the constraint is satisfied by the coloring. Optimization problems concern finding colorings such that as many constraints as possible from a set of constraints are satisfied. The PCP theorem has a connection to optimization problems, and the thesis proves the PCP theorem by exploiting this connection. The thesis follows I. Dinur's corresponding proof from the 2007 article The PCP Theorem by Gap Amplification. A constraint graph is a graph in which a constraint is attached to each edge; it also has an alphabet containing the symbols that may appear in the graph's constraints and colorings. The main theorem of the thesis makes it possible to amplify the relative fraction of constraints in a constraint graph that are unsatisfied by any coloring. The main theorem guarantees that the size of the graph stays within the same order of magnitude and that the size of the graph's alphabet does not change. Moreover, if all constraints of the graph are satisfied by some coloring, all constraints of the graph produced by the main theorem are still satisfied by some coloring. The main theorem is assembled in three stages, each corresponding to one section of the thesis.
In the first of these, section 4, the structure of the graph is shaped into a form suitable for the following sections. In the second stage, corresponding to section 6, the number of unsatisfied constraints is amplified using walks on the graph, but the alphabet grows at the same time. In the third stage, section 5, the alphabet size is reduced to three with a suitable algorithm. Section 7 assembles the main theorem and, by iterating it, proves the PCP theorem.
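The inclusion the abstract states can be written compactly as follows (standard formulation of the PCP theorem, not quoted verbatim from the thesis):

```latex
\[
  \mathsf{NP} \subseteq \mathsf{PCP}\bigl[O(\log n),\, O(1)\bigr]
\]
% A language L is in PCP[r(n), q(n)] if a probabilistic polynomial-time
% verifier, using O(r(n)) random bits and querying O(q(n)) bits of a
% supplied proof, accepts every x in L for some proof, and rejects every
% x not in L with probability at least 1/2 for every proof.
```

The reverse inclusion is comparatively easy, which is why the theorem is often stated as an equality; the hard direction, proved via Dinur's gap amplification, is the one above.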
  • Lintusaari, Jarno (2014)
    This thesis proposes a generalization of the model class of labeled directed acyclic graphs (LDAGs) introduced in Pensar et al. (2013), which are themselves a generalization of ordinary Bayesian networks. LDAGs allow encoding a more refined dependency structure than Bayesian networks, using a single DAG augmented with labels. The labels correspond to context-specific independencies (CSIs), which must be present in every parameterization of an LDAG. The generalization of LDAGs developed in this thesis allows placing partial context-specific independencies (PCSIs) into the labels of an LDAG model, further increasing the space of encodable dependency structures. PCSIs allow a set of random variables to be independent of another when restricted to a subset of their outcome space. The generalized model class is named PCSI-labeled directed acyclic graph (PLDAG). Several properties of PLDAGs are studied, including PCSI-equivalence of two distinct models, which corresponds to Markov equivalence of ordinary DAGs. The efficient structure learning algorithm introduced for LDAGs is extended to learn PLDAG models. This algorithm combines a non-reversible Markov chain Monte Carlo (MCMC) method for ordinary DAG structure learning with a greedy hill-climbing approach. The performance of PLDAG learning is compared against LDAG and traditional DAG learning using three different measures: Kullback-Leibler divergence, the number of free parameters in the model and the correctness of the learned DAG structure. The results show that PLDAGs further decreased the number of free parameters needed in the learned model compared to LDAGs, while maintaining the same level of performance with respect to Kullback-Leibler divergence. The PLDAG and LDAG structure learning algorithms were also able to learn the correct DAG structure with less data than the base MCMC algorithm in the traditional DAG structure learning task.
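The first of the three comparison measures, Kullback-Leibler divergence, quantifies how far a learned model's distribution lies from the true one. A minimal sketch for the discrete case, with made-up distributions standing in for the learned and true models:

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) between two discrete
    distributions, with the 0 * log(0/q) = 0 convention."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Hypothetical joint distributions: the true model vs. a learned one.
true_dist    = [0.50, 0.25, 0.25]
learned_dist = [0.40, 0.30, 0.30]
d = kl_divergence(true_dist, learned_dist)
```

D(p || q) is zero exactly when the two distributions coincide and positive otherwise, which is why matching LDAG-level KL divergence with fewer free parameters counts as a gain for PLDAGs.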
  • Alanko, Joonas (2018)
    Northern peatlands are a valuable, volatile carbon stock and hold 30 % of the global terrestrial organic carbon. Climate is the most important control on peatland ecology, and positive and negative feedback loops between peatlands and climate make this relationship complex. Climate change affects the high northern latitudes in particular, making northern peatlands prone to massive ecological changes. It is unclear exactly how long-term climate change will affect such an integral part of the carbon cycle. My intent was to produce a reliable study of what kind of response bogs developed after the Little Ice Age (LIA) and how climate warming has affected these important carbon stocks. This study uses high-frequency, multi-proxy, post-LIA peatland response data to map the ecological response of boreal ombrotrophic peatland ecosystems. The study sites are located in southern Finland and Estonia. I concentrate on three distinct micro-habitats, and use low-frequency data from previous studies to compare with my novel data and to give the study more spatial scope. My analysis is structured around a chronology consisting of 210Pb dates. Plant macrofossil data, present vegetation data – with Sphagnum mosses as the most important group – and modern water-table data are used to model past plant composition and hydrology. Age-depth models and water-table reconstructions have been created on this basis. Bulk density and the C/N ratio were also analysed. Large and fast-paced shifts in accumulation rates and changes in vegetation composition were revealed. After the LIA, peatland surfaces have dried and dry-habitat vegetation has increased. I identified a two-step pattern in response to post-LIA climate shifts: a wet period ended the LIA, followed by a two-step warming identified from different proxies and models. The pattern of change coincides with the known changes in climate.
This suggests that after the LIA, changes in climate have been the driving force behind changes in peatland ecology and the carbon sequestration in it. The results show that the way northern bogs respond to changes in climate can in the short term have large effects on the vegetation, and in the long term threaten the whole peatland and its carbon stocks. These changes are manifested through changes in the ratio of primary production to decomposition and in local hydrology. Some microhabitats are more vulnerable to climate shifts than others. In the future, climate warming will continue to influence northern peatlands. Depending on the scale of change, peatlands can act as a sink of atmospheric carbon or, if a tipping point is reached, release large amounts of carbon to the atmosphere. Most likely this would not only destroy peatlands in large quantities, but also further enhance the positive feedback between carbon release and peatland drying.
  • Junna, Tuomas (2020)
    Pedogenic ferromanganese nodules and concretions are prevalent redoximorphic features in tropical and subtropical soils. The nodules are typically highly enriched in Fe and Mn, which are present as oxides, hydroxides and oxyhydroxides. The formation of nodules happens via the precipitation and translocation of metals as the soil redox state undergoes cyclical changes between reducing and oxidizing settings. As the elemental distribution and structure of a nodule are primarily an expression of the prevailing soil redox conditions, Fe-Mn nodules have the potential to be a useful tool in paleoclimatological analysis. The Chinese Loess Plateau (CLP) is a terrestrial archive for the study of changes in the monsoon climate system. During the Late Miocene, the intensification of the Asian monsoon system caused an increase in warmth and humidity in inland Eastern Asia during a global trend of increasing aridity and decreasing temperatures. Fe-Mn nodules from three different soil horizons, formed 8.07, 7.7 and 3.7 Ma ago in Lantian, southern CLP, were studied to compare nodules from varying sedimentary settings formed under different moisture regimes. Using electron microscopy methods, the structure and elemental distribution of the nodules were described to compare their redoximorphic features. Large Fe-Mn nodules from the floodplain sediments (8.07 Ma) show a well-developed structure, high metal enrichment and signs of variation in the rate of formation and in the dominant redox states; the soil redox conditions were likely primarily controlled by river flooding. Nodules from the two eolian deposits (7.7 Ma and 3.7 Ma) were, on average, smaller and showed less metal enrichment, less elemental differentiation and less variance in the dominant redox conditions. Only small, poorly developed nodules were found in the older eolian sediments, whereas the younger soil horizon contained larger nodules with evidence of higher hydromorphism.
While the potential exists for using nodules from eolian sediments to assess changes in precipitation, the lack of paleoclimatological information in the smaller nodules, the small sample count, the limitations of the methods and the variance in depositional settings increase the uncertainty of the interpretation.
  • Lepistö, Tiina (2014)
    The soft soil sediments in southern Finland were originally deposited during different stages of the Baltic Sea. There are many problems concerning construction on these soft soils, including, among others, settlement and stability problems. Demands for construction work in soft soil areas are growing all the time, and it is therefore important to study the geological and geophysical properties of the soil. This thesis examines the possibility of drawing conclusions about soft soil stratigraphy with the help of in situ measurements. In situ measurements are performed on-site, and they speed up the work because separate sampling and laboratory analyses are not required. The measurements were carried out in the Helsinki Metropolitan Area. The apparent resistivity, temperature and magnetic susceptibility of the sediment deposits were measured with probes installed on the tips of bog drills. The results were compared to existing stratigraphic data and other information. The research also included method testing with a susceptibility measuring instrument purchased by the Geological Survey of Finland. In situ measurement of apparent resistivity provides reliable information on soft soil stratigraphy: resistivity profiles can be used to distinguish topsoil and different clays, and sandy intermediate layers are also well distinguished in these high-resolution measurements. The apparent resistivity varies similarly in every research area. In situ measurements of magnetic susceptibility can be used to estimate the incidence of sulphide clays. In this work, however, no conclusions could be drawn regarding the variability of susceptibility due to weaknesses of the apparatus; its use revealed several problems, the most notable being the lack of a temperature sensor.