
Browsing by Title


  • Dubey, Anshuman (2017)
    Conformal blocks are building blocks of correlation functions in conformal field theories (CFTs). They neatly encode the universal information dictated by conformal symmetry and separate it from the dynamical information that depends on the particular theory. Conformal blocks merit an in-depth study, as is evidenced by their extensive applications in the study of bulk locality in the AdS/CFT correspondence and the recent conformal bootstrap program. The vacuum Virasoro blocks in the semi-classical (large central charge) limit are known to compute the leading-order contribution to the Rényi entropy. Moreover, the semi-classical Virasoro blocks, together with the conformal bootstrap, feature in a proof of the cluster decomposition principle for AdS3/CFT2. In this thesis, conformal field theory and its necessary ingredients are briefly reviewed. Conformal blocks from the exchange of a spinless operator are evaluated by holographic computations of geodesic Witten diagrams for AdSd+1/CFTd. The results are verified against the Casimir operator method of Dolan and Osborn. Virasoro blocks in various semi-classical limits are discussed, and holographic Virasoro blocks are calculated in the global, heavy-light, and perturbative heavy limits; the results are verified using the monodromy method. Finally, defect conformal field theories (dCFTs) are introduced and, as an original contribution, an integral expression for defect conformal blocks is obtained, which is expected to precisely match the corresponding result in the dCFT literature.
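    For orientation, the standard Dolan-Osborn structure referenced above can be stated compactly. In d = 2, and for equal external scalar dimensions (an assumption made here for brevity), the block factorizes into SL(2) blocks:

    ```latex
    % SL(2) block k and the d=2 Dolan-Osborn conformal block for an exchanged
    % operator of dimension Delta and spin l (equal external dimensions assumed).
    \[
      k_{\beta}(z) = z^{\beta/2}\,{}_2F_1\!\left(\tfrac{\beta}{2},\tfrac{\beta}{2};\beta;z\right),
      \qquad
      g_{\Delta,\ell}(z,\bar z) = \frac{k_{\Delta+\ell}(z)\,k_{\Delta-\ell}(\bar z)
        + k_{\Delta-\ell}(z)\,k_{\Delta+\ell}(\bar z)}{1+\delta_{\ell,0}} .
    \]
    ```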
  • Bagalá, Nicola (2017)
    Given two groups, there are several ways of obtaining new ones. This work focuses on three of these ways: the direct, semidirect, and wreath products. These three products can be thought of as successively 'building upon' each other, since the definition of the semidirect product depends on the concept of the direct product, and wreath products are essentially a particular example of semidirect products. The concepts above were explored both theoretically and practically, by means of several different examples as well as some digressions from the main topics for the benefit of interested readers. The most substantial and involved examples of semidirect and wreath products were given in the last section, where the algebraic structures of Rubik's group and of the illegal Rubik's group are introduced. These are the groups of, respectively, all legal and all possible (legal or illegal) moves one can perform on Rubik's cube. An illegal move is one that cannot be performed without taking the cube apart and reassembling it differently. Rubik's group is generated by all legal basic moves that can be performed on Rubik's cube - for example, twisting a face of the cube left or right. This extremely large group contains two particular subgroups, namely the subgroups of orientation-preserving and position-preserving moves. The first is such that any of the moves in it, if applied to the cube, will leave the orientation of all the cube's 'cubies' unchanged with respect to a labelling system established beforehand on the cube itself, though they may change the position of the cubies. Similarly, the elements of the subgroup of position-preserving moves will not change the position of the cubies, but they may change their orientation. The main result proved in this work is that the legal Rubik's group is the semidirect product of the orientation-preserving and position-preserving subgroups. The method used is mainly based on, and expands upon, that used by Christoph Bandelow in his book Inside Rubik's Cube and Beyond. A second fact - that the illegal Rubik's group is isomorphic to a direct product of wreath products - was also proved as a secondary goal.
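    As a reminder of the construction the abstract builds on, the (outer) semidirect product is defined by the multiplication rule below; the direct product is the special case of a trivial action, and the wreath product A ≀ H is the semidirect product of A^H by H permuting coordinates:

    ```latex
    % Multiplication in N \rtimes_\varphi H, where \varphi : H -> Aut(N)
    % is a homomorphism specifying how H acts on N.
    \[
      (n_1, h_1)\,(n_2, h_2) = \bigl(n_1\,\varphi_{h_1}(n_2),\; h_1 h_2\bigr),
      \qquad n_i \in N,\ h_i \in H,\ \varphi : H \to \operatorname{Aut}(N).
    \]
    ```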
  • Kilponen, Simo (2015)
    This thesis examines the use of a semi-Markov process for modelling the lapse event in voluntary life insurance. The development of the insured risk is modelled as a multi-state semi-Markov process, and the lapse intensity is defined as a function of the premium based on this model and the premium given by classical life insurance theory. The lapse intensity is additionally set to depend linearly on a constant interpreted as the lapse sensitivity. As a numerical example, the development of an insurance portfolio is simulated with different lapse sensitivities, and the composition of the active portfolio after the simulation period is examined.
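    The abstract does not spell out the functional form of the lapse intensity; a hedged sketch of the kind of dependence described (linear in a lapse-sensitivity constant γ, driven by the semi-Markov premium π^SM and the classical premium π^cl) might read:

    ```latex
    % Illustrative form only: the thesis's exact choice of f is not given in
    % the abstract; f could be, e.g., the positive part of the premium difference.
    \[
      \mu^{\text{lapse}}(t) \;=\; \gamma \, f\!\bigl(\pi^{SM}(t),\, \pi^{cl}(t)\bigr),
      \qquad \gamma \ge 0 .
    \]
    ```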
  • Rajani, Chang (2018)
    Fouling is a large-scale problem in industrial equipment such as heat exchangers or pipes, used in factories, ships, airplanes, etc. Traditionally, such equipment is cleaned using sandblasting, chemicals or mechanical methods, all of which require halting the process, which is costly. Recently, high-power ultrasound has become a viable alternative to these methods. In ultrasonic cleaning, ultrasound is projected into the equipment from the outside, which means that the process does not need to be halted to perform cleaning. While the cleaning itself is non-invasive in nature, in most cases vision cannot be used to determine whether cleaning is actually necessary or not. What remains is to find a detection method that is likewise non-invasive. It is possible to use ultrasound as a kind of radar to detect whether or not fouling is present, and this has been attempted in previous literature. However, until now, such methods have required extensive manual calculation and knowledge of the physical properties of the setup. We present the first system to concurrently clean and detect industrial fouling using ultrasound and deep learning. Our method does not rely on specific properties of the equipment, allowing it to generalize to large industrial processes where it is not practical to calculate or simulate the cleaning scenario. To this end, we extend existing literature on semi-supervised learning by presenting algorithms used to learn from a monotonic process, and model the high-dimensional signal data using a convolutional neural network that is highly robust to temporal variance. This thesis presents the machine learning solution behind the system; the cleaning components are provided by Altum Technologies. Further, we explore methods to detect and counter the so-called domain shift that occurs when experimenting in the physical world, and provide experimental evidence that our methods work in practice.
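    The abstract does not disclose the network architecture, so the following PyTorch sketch is only an illustration of a temporally robust 1-D CNN over raw ultrasound echoes; all layer sizes and the global-pooling choice are assumptions:

    ```python
    # Minimal sketch of a 1-D CNN fouling classifier over raw ultrasound echoes.
    # Layer sizes are illustrative assumptions, not the thesis architecture.
    # Global average pooling over time gives robustness to temporal shifts.
    import torch
    import torch.nn as nn

    class FoulingCNN(nn.Module):
        def __init__(self, n_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=9, stride=2), nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=9, stride=2), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=9, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),  # shift-invariant temporal summary
            )
            self.classifier = nn.Linear(64, n_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, 1, n_samples) raw echo signal
            return self.classifier(self.features(x).squeeze(-1))

    model = FoulingCNN()
    logits = model(torch.randn(8, 1, 4096))  # 8 synthetic echoes of 4096 samples
    ```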
  • Pulkkinen, Teemu (University of Helsinki, 2010)
    In this thesis, a manifold learning method is applied to the problem of WLAN positioning and automatic radio map creation. Due to the nature of WLAN signal strength measurements, a signal map created from raw measurements results in non-linear distance relations between measurement points. These signal strength vectors reside in a high-dimensional coordinate system. With the help of the so-called Isomap algorithm, the dimensionality of this map can be reduced, making it easier to process. By embedding position-labeled strategic key points, we can automatically adjust the mapping to match the surveyed environment. The environment is thus learned in a semi-supervised way; gathering training points and embedding them in a two-dimensional manifold gives us a rough mapping of the measured environment. After a calibration phase, where the labeled key points in the training data are used to associate coordinates in the manifold representation with geographical locations, we can perform positioning using the adjusted map. This can be achieved through a traditional supervised learning process, which in our case is a simple nearest-neighbors matching of a sampled signal strength vector. We deployed this system in two locations on the Kumpula campus in Helsinki, Finland. The results indicate that positioning based on the learned radio map can achieve good accuracy, especially in hallways or other areas where the WLAN signal is constrained by obstacles such as walls.
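    A minimal sketch of this pipeline with scikit-learn (the library calls are real, but the parameters and data here are illustrative assumptions, not the thesis's implementation): embed the signal-strength fingerprints with Isomap, then match a sampled vector to its nearest neighbour in the manifold:

    ```python
    # Hedged sketch: Isomap embedding of RSSI fingerprints + 1-NN matching.
    import numpy as np
    from sklearn.manifold import Isomap
    from sklearn.neighbors import NearestNeighbors

    rssi_train = np.random.rand(200, 30)     # 200 fingerprints, 30 access points
    embedding = Isomap(n_neighbors=10, n_components=2).fit(rssi_train)
    manifold_2d = embedding.transform(rssi_train)

    # Calibration: labeled key points would map manifold coordinates to
    # floor-plan coordinates (e.g. via an affine fit); omitted for brevity.

    nn = NearestNeighbors(n_neighbors=1).fit(manifold_2d)
    query = embedding.transform(np.random.rand(1, 30))   # a sampled RSSI vector
    _, idx = nn.kneighbors(query)
    print("matched training point:", idx[0][0])
    ```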
  • Jeskanen, Juuso-Markus (2021)
    Developing reliable, regulatory-compliant and customer-oriented credit risk models requires thorough knowledge of the credit risk phenomenon. Tight collaboration between stakeholders is necessary, and hence models need to be transparent, interpretable and explainable, as well as accurate, even for experts without a statistical background. In the context of credit risk, one can speak of explainable artificial intelligence (XAI). Hence, practice and market standards are also underlined in this study. So far, credit risk research has mainly focused on the estimation of the probability of default parameter. However, as systems and processes have evolved to comply with regulation in the last decade, recovery data has improved, which has raised loss given default (LGD) to the heart of credit risk. In the context of LGD, most studies have emphasized the estimation of one-stage models. In practice, however, market standards support a multi-stage approach that follows the institution's simplified recovery processes. Generally, multi-stage models are more transparent, have better predictive power and comply better with regulation. This thesis presents a framework to analyze and execute sensitivity analysis for a multi-stage LGD model. The main contribution of the study is to increase the knowledge of LGD modelling by giving insights into the sensitivity of discriminatory power between risk drivers, model components and the LGD score. The study aims to answer two questions. Firstly, how sensitive is the predictive power of a multi-stage LGD model to the correlation of risk drivers and individual components? Secondly, how can the most important risk factors be identified that need to be considered in multi-stage LGD modelling to achieve an adequate LGD score? The experimental part of this thesis is divided into two parts. The first presents the motivation, study design and experimental setup used to execute the study. The second focuses on the sensitivity analysis of risk drivers, components and the LGD score. The sensitivity analysis presented in this study gives important knowledge of the behaviour of the multi-stage LGD model and the dependencies between independent risk drivers, components and the LGD score with regard to correlations and model performance metrics. The introduced sensitivity framework can be utilised in assessing the need and schedule for model calibrations in relation to changes in the application portfolio. In addition, the framework and results can be used to recognize the need for updates to the monthly IFRS 9 ECL calculations. The study also gives input for model stress testing, where different scenarios and impacts are analyzed with regard to changes in macroeconomic conditions. Even though the focus of this study is on credit risk, the methods presented are also applicable in fields outside the financial sector.
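    As an illustration of what a multi-stage LGD decomposition can look like (a generic two-stage sketch under assumed components, not the confidential model studied in the thesis): stage one scores the cure probability, stage two the loss severity of non-cured cases, and the LGD score combines the two:

    ```python
    # Hedged two-stage LGD sketch with synthetic data: LGD = P(no cure) * severity.
    import numpy as np
    from sklearn.linear_model import LogisticRegression, LinearRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 4))            # risk drivers (illustrative)
    cured = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)
    severity = np.clip(0.5 - 0.2 * X[:, 1] + 0.1 * rng.normal(size=1000), 0, 1)

    stage1 = LogisticRegression().fit(X, cured)                  # P(cure | drivers)
    stage2 = LinearRegression().fit(X[cured == 0], severity[cured == 0])

    p_cure = stage1.predict_proba(X)[:, 1]
    lgd_score = (1 - p_cure) * np.clip(stage2.predict(X), 0, 1)  # cured => LGD ~ 0
    ```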
  • Lassila, Juuso (2024)
    Calculating sentence similarities is an essential task in natural language processing. It enables similarity search, where the most similar sentence to a query sentence is found among many; it enables clustering text by semantic meaning; and the sentence embeddings used for calculating the similarities can also serve as input for any text classification model. There is much room for improvement in sentence embedding model architectures and training methods, both in terms of accuracy and training efficiency. This thesis experiments with a novel unsupervised training method called Sentence Embeddings via Token Inference (SETI), which is efficient by design, to see if it can compete with other methods in accuracy. Using the same data, our experiments train SETI and three other existing training methods: TSDAE, QuickThoughts, and generic MLM. We then compare these models to each other in different sentence similarity and downstream classification tasks. Based on our experiments, SETI is comparable to TSDAE and better than the generic MLM and QuickThoughts training methods in sentence similarity tasks. However, TSDAE has the highest accuracy for downstream classification tasks, while SETI still beats the generic MLM and QuickThoughts models.
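    SETI itself is not spelled out in the abstract, so as a generic illustration of how sentence embeddings are used for similarity (the base checkpoint and mean pooling are assumptions, not the thesis's method):

    ```python
    # Generic sketch: sentence similarity from mean-pooled token embeddings.
    import torch
    from transformers import AutoTokenizer, AutoModel

    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    def embed(sentence: str) -> torch.Tensor:
        inputs = tok(sentence, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state  # (1, seq, dim)
        return hidden.mean(dim=1).squeeze(0)            # mean pooling over tokens

    a, b = embed("A cat sat on the mat."), embed("A kitten rests on the rug.")
    print(float(torch.cosine_similarity(a, b, dim=0)))
    ```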
  • Nikkari, Eeva (2017)
    The sentence segmentation task is the task of segmenting a text corpus into sentences. Segmenting well-structured and fully punctuated data into sentences is not a very difficult problem. However, when the data is poorly structured or missing punctuation, the task is more difficult. This thesis looks into this problem using probabilistic language modeling, with special emphasis on the n-gram model. We present theory related to language models and their evaluation, as well as empirical results achieved on documents provided by AlphaSense Oy and the freely available Reuters-21578 corpus. The experiments on n-gram models focused on the following questions. How do the smoothing and the order of the n-gram affect the model? How well does a model trained on one type of data adapt to another type of text? How does retaining more or fewer symbols and punctuation affect the performance? And how much training data is enough for the model? The n-gram models performed rather well on the same type of data they were trained on. However, the performance was significantly worse when moving to another document type. In the absence of punctuation, the performance of the model was also rather poor. The conclusion is that the n-gram model seems inadequate for recovering sentence boundaries in difficult settings, such as separating an unpunctuated title from the body of the text.
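    A toy illustration of n-gram boundary detection in the spirit described above (the decision rule and add-one smoothing are simplifying assumptions): train a bigram model with an explicit boundary token, then propose a boundary wherever it is the more probable continuation:

    ```python
    # Toy bigram sentence-boundary detector with add-one smoothing.
    from collections import Counter

    BOUNDARY = "<s>"
    train = "the cat sat . it slept . the dog barked .".split()
    train = [BOUNDARY if w == "." else w for w in train]

    bigrams = Counter(zip(train, train[1:]))
    unigrams = Counter(train)

    def p(w2, w1, v=len(unigrams)):
        # Add-one smoothed bigram probability P(w2 | w1)
        return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + v)

    # Unpunctuated test text: propose a boundary after w1 if <s> beats w2.
    test = "it slept the dog barked".split()
    for w1, w2 in zip(test, test[1:]):
        if p(BOUNDARY, w1) > p(w2, w1):
            print(f"boundary after {w1!r}")
    ```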
  • Kortesalmi, Ville (2024)
    Improving employee well-being is a key part of pension agency Keva's mission statement. Recently, Keva launched a tool for conducting repeated small-scale employee well-being surveys called ”Pulssi”. With the number of responses reaching into the thousands, Keva has identified the processing and organizing of this data as a part of the process that could be improved using machine learning methods. In this thesis, we conducted a comprehensive investigation into using language models and sentiment classification as a solution. We tested three different methodologies for this purpose: traditional machine learning with learned embeddings, generative language models, and fine-tuned BERT models. To our knowledge, this is the first study evaluating the use of language models on a Finnish sentiment analysis task. Additionally, we evaluated the feasibility of implementing these methods based on their operating costs and the time it took to create classifications. We found that the traditional machine learning models trained on learned embeddings performed surprisingly well, achieving an accuracy of 91%. These models offer a fast and cost-effective alternative to the more cumbersome language models. Our fine-tuned BERT model, ”KevaBERT”, achieved an impressive accuracy of 93.6% when trained on GPT-4-generated predictions, suggesting a potential pathway for training data creation. Overall, our best performance was achieved by the ”GPT-4 few-shot with context” model at 93.9% accuracy. Our accuracies rival or even surpass the state-of-the-art accuracies achieved on other datasets. Despite the near human-level performance, this model was slow and expensive to operate. Based on these findings, we recommend the use of our ”KevaBERT” model for sentiment classification and a separate GPT-4-based model for text summarization.
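    A hedged sketch of such a fine-tuning setup with Hugging Face transformers; the base checkpoint (TurkuNLP's Finnish BERT), the three-class label scheme and all hyperparameters are assumptions, since the abstract does not specify them:

    ```python
    # Hedged fine-tuning sketch. Labels would come from GPT-4-generated
    # predictions, as described above; datasets are not shown here.
    from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                              Trainer, TrainingArguments)

    name = "TurkuNLP/bert-base-finnish-cased-v1"          # assumed base model
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=3)

    def tokenize(batch):
        return tok(batch["text"], truncation=True, padding="max_length")

    # train_ds / eval_ds: datasets of {"text": ..., "label": 0|1|2}, e.g.
    # negative, neutral, positive, labeled by a GPT-4 prompt (not shown).
    args = TrainingArguments(output_dir="kevabert", num_train_epochs=3,
                             per_device_train_batch_size=16)
    # Trainer(model=model, args=args, train_dataset=train_ds.map(tokenize),
    #         eval_dataset=eval_ds.map(tokenize)).train()
    ```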
  • Penttinen, Toni (2020)
    Antimony is a heavy metal whose presence and hazards have attracted attention over the past two decades. Particularly problematic sites with respect to antimony are the waters and soils of mining environments. From the standpoint of radiochemistry, a more relevant problem area is the primary circuit water of nuclear power plants, where the gamma-active antimony isotopes 122Sb, 124Sb and 125Sb are found. The isotopes are formed through activation of the circuit's corrosion products. Research on the separation of antimony from aqueous matrices is rather scarce, and efforts have been made to develop new, more efficient methods. One of the materials under investigation has been the already extensively studied zirconium dioxide, i.e. zirconia, as an antimony adsorbent. The advantages of adsorption include its simplicity and low cost. The literature review introduces zirconia and its properties, as well as the considerations that favour producing zirconia in nanofibre form. In the experimental part, the ability of nanofibrous zirconia to act as an antimony adsorbent in the aqueous phase was investigated. The effect of lanthanum and vanadium doping on the adsorption capacity of the fibre was examined. Doping zirconia with lanthanum improved the antimony adsorption capacity of the fibre. The adsorption-related parameters studied were the isoelectric point of the fibres, the loading value, the distribution coefficient and the sorption percentage. The effect of a competing anion on the isoelectric point of the fibre was studied with sulphate additions. The selectivity of the fibre was assessed in part by studying its adsorption properties with respect to selenium.
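    The batch-sorption parameters mentioned above are conventionally defined as follows, with C_0 and C_eq the initial and equilibrium concentrations (or activities), V the solution volume and m the adsorbent mass:

    ```latex
    % Distribution coefficient K_d and sorption percentage S.
    \[
      K_d = \frac{C_0 - C_{eq}}{C_{eq}} \cdot \frac{V}{m}
      \;\left(\mathrm{mL\,g^{-1}}\right),
      \qquad
      S = \frac{C_0 - C_{eq}}{C_0} \times 100\,\%.
    \]
    ```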
  • Jortikka, Santeri (2022)
    Measurement of alpha-active actinides requires separation from other alpha-emitting radionuclides. A method of actinide separation was needed for the primary coolant water of the Loviisa Nuclear Power Plant. A method published by Eichrom Ltd was chosen for evaluation; the method utilises a vacuum box with stacked TEVA/TRU columns, which speeds up and eases the analysis process. The method can be used to separate americium, curium, plutonium and uranium from water samples, and it gave excellent results both with reference samples and primary coolant water. The separation was also tested with other, more difficult matrices: ion exchange resins, surface swipes, aerosol filters and process waste waters. Pretreatment methods for these matrices were assessed and tested to reduce the sample to a soluble form that could be loaded into the separation system. DGA resin-based methods were tested for both gross-alpha and nuclide-specific analyses. The gross-alpha method with DGA was fast, efficient and reliable. Gross-alpha counting samples could be produced within hours, and element fraction samples could be produced in 1 - 2 days. This, combined with the good recoveries of all fractions, meant shorter counting times to reach the minimum detectable activities (MDAs) required. The literature review takes a look at recent topics related to actinide separation and analysis from matrices similar to those discussed in the experimental section. Different extraction chromatography resins are discussed.
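    The minimum detectable activity referred to above is conventionally estimated with Currie's formula; here B is the background count in counting time t, ε the detection efficiency and Y the chemical recovery, which is why good recoveries translate into shorter counting times:

    ```latex
    % Currie-style MDA estimate (conventional form; exact correction terms
    % vary between laboratories).
    \[
      \mathrm{MDA} = \frac{2.71 + 4.65\sqrt{B}}{\varepsilon \, Y \, t}.
    \]
    ```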
  • Hedman, Peter (2015)
    The focus of this thesis is to accelerate the synthesis of physically accurate images using computers. Such images are generated by simulating how light flows in the scene using unbiased Monte Carlo algorithms. To date, the efficiency of these algorithms has been too low for real-time rendering of error-free images. This limits the applicability of physically accurate image synthesis in interactive contexts, such as pre-visualization or video games. We focus on the well-known Instant Radiosity algorithm by Keller [1997], which approximates the indirect light field using virtual point lights (VPLs). This approximation is unbiased and has the characteristic that the error is spread out over large areas in the image. This low-frequency noise manifests as an unwanted 'flickering' effect in image sequences if not kept temporally coherent. Currently, the limited VPL budget imposed by running the algorithm at interactive rates results in images that may noticeably differ from the ground truth. We introduce two new algorithms that alleviate these issues. The first, clustered hierarchical importance sampling, reduces the overall error by increasing the VPL budget without incurring a significant performance cost. It uses an unbiased Monte Carlo estimator to estimate the sensor response caused by all VPLs. We reduce the variance of this estimator with an efficient hierarchical importance sampling method. The second, sequential Monte Carlo Instant Radiosity, generates the VPLs using heuristic sampling and employs non-parametric density estimation to resolve their probability densities. As a result, the algorithm is able to reduce the number of VPLs that move between frames, while also placing them in regions where they bring light to the image. This increases the quality of the individual frames while keeping the noise temporally coherent, and thus less noticeable, between frames. When combined, the two algorithms form a rendering system that performs favourably against traditional path tracing methods, both in terms of performance and quality. Unlike prior VPL-based methods, our system does not suffer from an objectionable lack of temporal coherence in highly occluded scenes.
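    For context, the VPL estimator at the core of Instant Radiosity sums the contributions of N virtual point lights with positions y_i and fluxes Φ_i; f_r is the BRDF, G the geometry term and V the binary visibility term. Both algorithms above work on the variance and temporal coherence of this sum:

    ```latex
    % Indirect radiance at shading point x in direction omega, estimated
    % from N VPLs; importance sampling draws the y_i from a density close
    % to their contribution to reduce variance.
    \[
      L(x \to \omega) \;\approx\; \sum_{i=1}^{N}
        f_r(x, \omega_i, \omega)\, G(x, y_i)\, V(x, y_i)\, \Phi_i .
    \]
    ```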
  • Rintamäki, Annukka (2016)
    The ultramafic rocks of the Komati Complex, Barberton greenstone belt, South Africa, have been serpentinized thoroughly and carbonated with variable intensity. The conditions of serpentinization and carbonation of the ultramafic rocks in the Komati Complex were studied using serpentine phase characterization by Raman spectroscopy, calcite-hosted fluid inclusion microthermometry and chlorite geothermometry. Three serpentine phases (lizardite, chrysotile and antigorite) occur in the four samples studied with Raman spectroscopy. Antigorite dominates the serpentine mineralogy in two of the samples, and the other two contain large amounts of lizardite and antigorite, chrysotile being a minor constituent in three of the four samples. Abundant antigorite indicates serpentine crystallization above a temperature of ~320 °C. Fluid inclusion petrography and microthermometric data revealed four fluid inclusion assemblages (FIAs), FIA 1 to FIA 4, ordered from earliest to latest entrapment. The FIAs have homogenization temperatures in the ranges of ~170 - 240 °C (FIA 1), 154 - 163 °C (FIA 2), 149 - 180 °C (FIA 3) and 112 - 137 °C (FIA 4). Relatively constant NaCl-equivalent salinities in the range of 6.4 - 11 wt-% were recorded for FIAs 1 and 2; similar salinities were indicated for FIAs 3 and 4. Chlorite geothermometry yielded temperatures in the approximate range of 150 - 250 °C. Chlorite crystallization is texturally indicated to be related to the formation of the calcite that hosts the fluid inclusions. The overlap of the chlorite geothermometry temperatures and the homogenization temperatures of the earliest fluid inclusions (FIA 1) indicates fluid inclusion entrapment at pressures lower than 200 - 300 bar and at temperatures equal to or slightly above the recorded homogenization temperatures. These pressure and temperature estimates suggest that carbonation occurred in a seafloor environment at moderate hydrothermal conditions. The properties of the carbonate-hosted fluid inclusions resemble those of fluid inclusions reported from Archean greenstone belts and interpreted as Archean seawater by previous contributions. The calcite-hosted fluid inclusions may thus represent entrapped Archean seawater.
  • Tilles, Jan (2020)
    Serverless computing, also known as Function as a Service (FaaS), is a cloud service model in which the cloud provider manages computing resources and tenants deploy their code without knowing the details of the underlying infrastructure. The promise of serverless is to drive costs down so that a tenant pays only for the computing resources it actually utilizes, instead of paying for idle containers or virtual machines. In this thesis, we show that serverless computing does not always fulfill this promise. For instance, some serverless frameworks keep certain resources, such as containers or functions, idle in order to reduce latency during function invocation. This may be particularly problematic in edge domains where computing power and resources are limited. In Function as a Service, the smallest unit of deployment is a function. These functions can be used, for example, to deploy traditional microservice-based applications. Serverless computing allows a tenant to run and scale functions with high availability. It also involves some tradeoffs: developers do not have as much control over the underlying environment, testing of serverless functions is cumbersome, and commercial cloud service providers' serverless technologies entail a high degree of vendor lock-in. A serverless application is stateless by nature, and it runs in a stateless container that is event-triggered and managed by the cloud provider. A serverless application can access databases, but, in general, state related to the function itself is not stored in files or databases. A number of commercial offerings and a wide range of open-source serverless frameworks are available. In this thesis, we present an overview of the different alternatives and show a qualitative comparison. We also show our benchmarking results with OpenFaaS running on a Kubernetes edge cloud (Raspberry Pi), based on algorithms typically utilized in machine learning.
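    For concreteness, an OpenFaaS-style Python function of the kind benchmarked above; the OpenFaaS python template expects a module exposing handle(req), while the toy nearest-neighbour payload is purely an illustrative assumption:

    ```python
    # Sketch of an OpenFaaS python-template handler with a toy ML payload.
    import json
    import numpy as np

    POINTS = np.random.rand(1000, 8)          # toy model "trained" at cold start
    LABELS = np.random.randint(0, 2, 1000)

    def handle(req: str) -> str:
        """Classify the query vector in `req` by 1-nearest-neighbour."""
        query = np.asarray(json.loads(req), dtype=float)
        nearest = int(np.argmin(np.linalg.norm(POINTS - query, axis=1)))
        return json.dumps({"label": int(LABELS[nearest])})

    if __name__ == "__main__":               # local smoke test
        print(handle(json.dumps([0.5] * 8)))
    ```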
  • Mäkeläinen, Sami (University of Helsinki, 2006)
    The mobile phone has, as a device, taken the world by storm in the past decade; from only 136 million phones globally in 1996, it is now estimated that by the end of 2008 roughly half of the world's population will own a mobile phone. Over the years, the capabilities of the phones as well as the networks have increased tremendously, reaching the point where the devices are better described as miniature computers than simply mobile phones. The mobile industry is currently undertaking several initiatives to develop new generations of mobile network technologies; technologies that to a large extent focus on offering ever-increasing data rates. This thesis seeks to answer the question of whether the future mobile networks in development and future mobile services are in sync: taking a forward-looking timeframe of five to eight years, will there be services that need the high-performance new networks being planned? The question is especially pertinent in light of the slower-than-expected takeoff of 3G data services. Current and future mobile services are analyzed from two viewpoints: first, looking at the gradual, evolutionary development of the services, and second, seeking to identify potential revolutionary new mobile services. With information on both current and future mobile networks as well as services, a mapping from network capabilities to service requirements is performed to identify which services will work in which networks. Based on the analysis, it is far from certain whether the new mobile networks, especially those planned for deployment after HSPA, will be needed as soon as they are currently roadmapped. The true service-based demand for the 'beyond HSPA' technologies may be many years in the future, or indeed may never materialize, thanks to the increasing deployment of local area wireless broadband technologies.
  • Mäntysaari, Ville (University of Helsinki, 2007)
    With the recent increase in interest in service-oriented architectures (SOA) and Web services, developing applications with the Web services paradigm has become feasible. Web services are self-describing, platform-independent computational elements. New applications can be assembled from a set of previously created Web services, which are composed together into a service that uses its components to perform a certain task. This is the idea of service composition. To bring service composition to the mobile phone, I have created Interactive Service Composer for mobile phones. With Interactive Service Composer, the user is able to build service compositions on his mobile phone, consisting of Web services or services that are available on the mobile phone itself. The service compositions are reusable and can be saved in the phone's memory. Previously saved compositions can also be used in new compositions. While developing applications for mobile phones has been possible for some time, the usability of the solutions does not match that of desktop development. When developing for mobile phones, the developer has to consider design decisions more carefully. Due to the lack of processing power and memory, the applications cannot function as well as on desktop PCs. On the other hand, this does not remove the appeal of developing applications for mobile devices.
  • Talja, Sauli (2013)
    The thesis evaluates the challenges in business process management and the need for service-oriented process models in the telecommunication business to alleviate integration work efforts and to reduce the total cost of ownership. The business aspect concentrates on operations and business support systems that are tailored for communication service providers. Business processes should be designed in conformance with the TeleManagement Forum's integrated business architecture framework. The thesis rationalizes the need to transform organizations and their way of working from vertical silos to horizontal layers and to understand the transformational efforts needed to adopt a new strategy. Furthermore, the thesis introduces service characterizations and goes deeper into the technical requirements that a service-compliant middleware system needs to support. At the end of the thesis, Nokia Siemens Networks' proprietary approach, the Process Automation Enabling Suite, is introduced, and finally two case studies are performed. The first is a Nokia Siemens Networks proprietary survey that highlights the importance of customer experience management, and the second is an overall research study whose results have been derived from other public surveys covering application integration efforts.
  • Varjus, Tomi (2015)
    Enäjärvi is a shallow, elongated lake in Southern Finland. The factors that led to its hypereutrophic state were a water level adjustment during the last century and a 25-year point-source nutrient load from the sewage waters of Nummela. The restoration aimed at improving the status of the lake started in 1993. The improvement actions have included, inter alia, oxidation of the hypolimnion, food-chain restoration by removing roach, and the reduction of diffuse load by building sedimentation basins, wetlands and buffer strips in the lake's catchment area. The lake is still highly eutrophic. Its surface sediment has been studied twice before. The first study was made in 1991, two years before the start of the restoration projects, and the second in 1999. In 1991 it was found that the surface sediment was completely dark, sulphide-colored gyttja that had a poor nutrient-retaining capacity. In the 1999 study, the sediment had turned back into oxidized, healthy sediment that worked as a sink for nutrients. The purpose of this thesis is to repeat those two studies. Together they provide a record of phosphorus retention monitoring in Enäjärvi spanning more than 20 years. The sampling was carried out in early spring 2013 at the same sample sites as in the earlier studies. Samples were taken from three different depths at 30 sample points. They were analyzed for water content, loss on ignition and total phosphorus. In addition, the sediment surface layer was analyzed for phosphorus fractions. The results were compared to previous findings with diagrams, percent change analysis, Spearman's rank correlation coefficients and regression line analysis. The water quality data collected by the Centre for Economic Development, Transport and the Environment were also used to support the findings. The results of this study show that the restoration activities carried out on the lake have reduced the diffuse loading to the lake and also the bioturbation-based leaching of phosphorus. The good quality of the surface sediment and the high content of the redox-sensitive phosphorus fraction, which increases with depth, indicate that oxygen levels have remained high enough to bind phosphorus to metal oxides. However, at some sample points the sediment surface had iron sulphide precipitates, which point to some depletion of the oxygen level in the hypolimnion.
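    A small illustration of the statistical comparison described above, with synthetic numbers in place of the thesis data (scipy's spearmanr and linregress are the standard tools; all values and names are assumptions):

    ```python
    # Illustrative comparison of total phosphorus between two survey years
    # at the same sample points; numbers are synthetic, not thesis data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    tp_1999 = rng.uniform(1.0, 3.0, 30)                  # mg/g at 30 sample points
    tp_2013 = 0.8 * tp_1999 + rng.normal(0, 0.2, 30)

    rho, p_value = stats.spearmanr(tp_1999, tp_2013)
    slope, intercept, r, p, se = stats.linregress(tp_1999, tp_2013)
    print(f"Spearman rho={rho:.2f} (p={p_value:.3f}), slope={slope:.2f}")
    ```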
  • Virolainen, Antti (2023)
    This thesis examines the community and participation work carried out by Finnish cities. For several decades, cities have implemented projects aimed at activating residents and increasing a sense of community, and promoting communality is a common goal in city strategies. At the strategy level, however, the concepts of community and communality remain poorly concretized. In addition, communality is often assumed to be an unambiguously positive phenomenon, even though the flip side of local communality may include, for example, local control, stigmatization and discrimination. The stated goals of cities' community projects include promoting the social cohesion of residential areas and improving residents' mood through various events and projects. In previous research, however, community projects have also been studied from a more political perspective. Community projects have been linked, among other things, to the 'third way' politics that favoured communitarianism. Features typical of communitarianism and third-way principles include, for example, bypassing purely state-based solutions and restoring morally responsible local communities. From a critical perspective, the activation of local communities has been interpreted as a social policy instrument which, according to its critics, has aimed to replace the state social policy system. The thesis analyses the thoughts on communality and its possibilities of the project specialists responsible for cities' community and participation work. My research questions are: 1) How is communality seen to emerge, and 2) what kinds of effects are community and participation work intended to produce? I unpack what communality means to the interviewed project specialists, what problems communality is supposed to solve, and how communality would solve these problems. The data consist of semi-structured thematic interviews (n=8) with project specialists working for five different Finnish cities, analysed with theory-guided content analysis. The analysis draws on the concept of affective citizenship, which describes how affects and citizenship combine with biopower and governmentality. Citizens may, for example, be encouraged to feel gratitude and loyalty towards the nation, and the legitimacy of some people's feelings may be recognized more than that of others. The analysis is also supported by theoretical perspectives related to the concepts of community and social capital, as well as Mark Granovetter's (1973) theory of weak ties. According to the results, cities' community and participation work can be interpreted through the lens of affective citizenship. The communality pursued by the projects is hoped to produce a certain kind of citizenship in which the residents of the project areas would take the community better into account in their own behaviour and actions. Communality is thought to produce a sense of safety, social control and trust among residents on the project areas. Increasing trust can also be interpreted as increasing social capital and weak ties between residents. Communality is thought to lead to better population relations and smoother local interaction. In addition, communality is thought to increase residents' sense of ownership of their areas. Ownership, in turn, would manifest as active participation in managing local affairs and a desire to take care of the area.
Communality is thus used to promote a more active citizenship that is more committed to its area. Based on the results, cities' community projects are not seen as capable of fixing structural problems, but on the basis of the interview data communality is seen as a possible means to increase the psychosocial well-being of residents suffering from the consequences of structural problems. The results make visible the premises that shape the planning of community work, which enables a critical assessment of the background assumptions guiding cities' community and participation work.
  • Lehtonen, Leevi (2021)
    Sex differences can be found in most human phenotypes, and they play an important role in human health and disease. Females and males have different sex chromosomes, which are known to cause sex differences, as are differences in the concentrations of sex hormones such as testosterone, estradiol and progesterone. The role of the autosomes, however, has remained more debated. The primary aim of this thesis is to assess the magnitude and relevance of sex-specific genetic architecture in the human autosomes. This is done by calculating sex-specific heritability estimates and estimates of the genetic correlation between females and males, and by comparing these to sex differences at the phenotype level. Additionally, the heritability and genetic correlation estimates are compared between two populations in order to relate the magnitude of sex differences to differences between populations. The analyses in this thesis are based on sex-stratified genome-wide association study (GWAS) data from 48 phenotypes in the UK Biobank (UKB), which contains genotype data from approximately 500 000 individuals as well as thousands of phenotype measurements. A replication of the analyses using three phenotypes was also performed on data from the FinnGen project, with a dataset from approximately 175 000 individuals. The 48 phenotypes used in this study range from biomarkers such as serum testosterone and albumin levels to general traits such as height and blood pressure. The heritability and genetic correlation estimates were calculated using linkage disequilibrium score regression (LDSC). LDSC fits a linear regression model between the test statistic values of GWAS variants and linkage disequilibrium (LD) scores calculated from a reference population. For most phenotypes, the heritability and genetic correlation results show little evidence of sex differences. Serum testosterone level and waist-to-hip ratio are exceptions, showing strong evidence of sex differences both at the genetic and the phenotype level. However, the overall correlation between phenotype-level sex differences and sex differences in heritability or genetic correlation estimates is low. The replication in the FinnGen dataset for height, weight and body mass index (BMI) showed that for these traits the differences in heritability estimates and genetic correlations between the Finnish and UK populations are comparable to or larger than the differences found between males and females.
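    For reference, the regression that LDSC fits (Bulik-Sullivan et al., 2015): for variant j with LD score ℓ_j, sample size N, M variants, SNP heritability h² and a confounding term Na in the intercept:

    ```latex
    % Expected GWAS chi-square statistic as a linear function of LD score;
    % the slope estimates N h^2 / M, the intercept absorbs confounding.
    \[
      \mathbb{E}\!\left[\chi^2_j\right] = \frac{N h^2}{M}\,\ell_j + N a + 1 .
    \]
    ```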