
Browsing by Title


  • Asikanius, Niina (2023)
    This thesis is an ethnographic exploration of co-production evaluation. Its aim is to evaluate the outcomes of a knowledge co-production workshop in the context of Finnish urban planning, using a co-production evaluation framework. To contextualize the research, the status of allotment gardens in urban planning was studied. Central concepts also include participation and the status of knowledge in the urban planning context. I collected my research data by participating in the workshop process as a co-facilitator and co-producer in a garden workshop held at the Pähkinärinne allotment plots in June 2022. I carried out the research using qualitative methods, primarily participant observation. Field notes and the material the garden workshop produced form the main body of data. The results show that the workshop did produce a tangible outcome: a usable concept for the Pähkinärinne allotment gardens. When situated in the Finnish urban planning context, the analysis shows that implementation may be difficult due to institutional and governance barriers. Intangible impacts were produced in the form of social learning. This entailed the identification of existing social networks in and outside of the allotment plots and their development through social capital. These effects fare better in the Finnish context through self-governance and self-organization. In conclusion, the knowledge co-production process was successful, but in the Finnish urban planning context bottom-up initiatives can be difficult to implement due to institutional barriers and city-led planning and participation.
  • Blomqvist, Sofia (2024)
    The matter in neutron stars exists under extreme conditions, and the cores of these stars harbour densities unreachable in any laboratory setting. This unique environment therefore provides an exceptional opportunity to investigate high-density matter, described by the theory of Quantum Chromodynamics (QCD). This thesis centers on the exploration of twin stars, hypothetical compact objects that extend beyond the neutron star sequence. Twin stars originate from a first-order phase transition between hadronic matter and quark matter, and our focus is on understanding the constraints on these phase transitions and their effect on the observable properties of twin stars. In our investigation we construct a large ensemble of possible equations of state featuring a strong first-order phase transition. We approximate the low- and high-density regions with polytropes and connect them to chiral effective field theory results at nuclear densities and to extrapolated perturbative QCD at high densities. The resulting equations of state are then subjected to astrophysical constraints obtained from high-mass pulsars and gravitational wave detections to verify their compatibility with observations. Within our simple study, we identify two distinct types of twin stars, each providing a clear signature in macroscopic observables. These solutions originate from separate, relatively small regions of the parameter space. Twin stars in our approach generally have small maximum masses, while the part of the sequence corresponding to neutron stars extends to large radii, indicating that these solutions only marginally pass the astrophysical constraints. Finally, we find that all twin stars obtain sizable cores of quark matter.
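
    The stellar-structure side of such a study rests on two standard ingredients, which can be written out explicitly. A sketch of the relations involved, in units where G = c = 1 (the segment constants K and Γ are free parameters of such an ensemble, not values taken from the thesis):

        % Polytropic segments of the equation of state
        p(\varepsilon) = K \, \varepsilon^{\Gamma}

        % Tolman-Oppenheimer-Volkoff equations, integrated for each EoS
        \frac{dm}{dr} = 4\pi r^{2}\,\varepsilon(r), \qquad
        \frac{dp}{dr} = -\,\frac{\bigl(\varepsilon + p\bigr)\bigl(m + 4\pi r^{3} p\bigr)}{r\,\bigl(r - 2m\bigr)}

    Integrating these outward from a range of central densities yields the mass-radius sequence on which twin-star branches appear.
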
  • Häkkinen, Anu (2017)
    Kawah Ijen is the picturesque crater of the Ijen volcano in Eastern Java, Indonesia. It is not just any volcano crater, however, as it is the locus of a labour-intensive sulphur mining operation. Each day up to 15 tons of sulphur are extracted from the Ijen crater by the 350 men working as manual miners. These men carry loads of sulphur of up to 100 kilograms out of the crater by sheer muscle, and the work is without doubt burdensome. Kawah Ijen's natural beauty has also caught the interest of tourists, and the crater has become commodified as a tourism destination, visited by hundreds of international tourists each day. The storyline of this master's thesis is thus two-fold. The first research objective scrutinizes the Kawah Ijen sulphur mine from a commodity chain perspective, emphasizing the tough work the sulphur miners have to bear in order to satisfy the needs of the consumers at the end of the chain. The second, and essential, objective of this research interrogates how the presence of the sulphur miners has also become an inevitable part of the Kawah Ijen tourism experience. Here the aspiration is to elucidate how the sulphur miners have become aestheticized as a Global South tourism attraction. In other words, this research interrogates the peculiarity of this reality by exploring how trade and culture, and human and commodity mobilities, are entangled and enshrouded within the crater of the Ijen volcano. In human geography, the 'follow the thing' research framework has been adopted by scholars to study the geographically far-flung production chains of consumer goods. As a framework it aims to make critical political-economic connections between consumers and distant, often underprivileged, producers. In this Marxist-influenced undertaking, emphasis is placed particularly on commodity fetishism, a notion mobilized to illuminate how consumers have become alienated from the means of production in their symbolically laden everyday consumption. As sulphur is a raw material needed in the production of many goods, such as white sugar, fertilizers, medicines, and rubber, this research shows how these commodities were 'followed' to their origins at this particular sulphur mine. During a period of fieldwork, participant observation was used to gain a contextual understanding of this production site. The initial research objective is therefore to make connections and create awareness of the inequalities within commodity production networks. In the final research objective of this master's thesis, a postcolonial approach is mobilized to critically interrogate this initial setting, in which the miners are seen as poor and stagnant producers. The Kawah Ijen tourists are thus taken under the lens in order to understand this touristic encounter, nuanced by cross-cultural and socio-economic differences between the tourists and the miners. Kawah Ijen is therefore observed not only as a place of production, but also as a site - and object - of consumption. By analysing blogged travel stories written by the tourists themselves, this research aims to illuminate what the tourism experience of Kawah Ijen is about in the realm of consumption. Special attention is given to how the encounter with the sulphur miners has become a constitutive part of the adventurous and authentic tourism experience of Kawah Ijen.
    The analysis of the blog posts on the Kawah Ijen tourism narrative shows how imaginaries of the sulphur miner as the 'Other' are adhered to as the tourists construct their travel identities, make meaning of their experiences, and finally represent their experience to the outside world. Finally, this research aims to make ruptures in Global South fetishism by elucidating how the Kawah Ijen sulphur mine has become both commoditized and fetishized in its own right. In this fetishization process the sulphur miners are depicted as poor and primitive, categories that act as symbols of authentic tourism consumption in the social frameworks of the tourists. The aim is not, however, to demonize the tourists, but to give recognition to the nuanced personal and social realities in which their consumption is embedded. Hence, the tourism experience of Kawah Ijen is approached from a point of view more sensitive to the subjective negotiation of authenticity. It is argued that the Kawah Ijen tourism experience is a process in which the meaning of the experience is negotiated in a wider framework, vicariously embedded in postcolonial discourse. Finally, it is concluded that although unequal power relations are present in the tourism consumption of Kawah Ijen, tourism can be a means for the miners to make a more sustainable living. The leap from mining to tourism only has to be carried out deliberately, with respect for all of the stakeholders.
  • Waltari, Otto Kustaa (2013)
    Advanced low-cost wireless technologies have enabled a huge variety of real-life applications in recent years. Wireless sensor technologies have emerged in almost every application field imaginable. Smartphones equipped with Internet connectivity and home electronics with networking capability have made their way into everyday life. The Internet of Things (IoT) is a novel paradigm that frames the idea of a large-scale sensing ecosystem in which all possible devices could contribute. The definition of a thing in this context is very vague: it can be anything from passive RFID tags on retail packaging to intelligent transducers observing the surrounding world. The number of connected devices in such a worldwide sensing network would be enormous. This is ultimately challenging for the current Internet architecture, which is several decades old and based on host-to-host connectivity. The current Internet addresses content by location. It is based on point-to-point connections, which means that every connected device has to be uniquely addressable through a hostname or an IP address. This paradigm was originally designed for sharing resources rather than data, yet today the majority of Internet usage consists of sharing data. Various patchy improvements have come and gone, but a thorough architectural redesign is required sooner or later. Information-Centric Networking (ICN) is a new networking paradigm that addresses content by name instead of location. Its goal is to replace the current where with what, since the location of most content on the Internet is irrelevant to the end user. Several ICN architecture proposals have emerged from the research community, of which Content-Centric Networking (CCN) is the most significant in the context of this thesis. We have come up with the idea of combining CCN with the concept of IoT. In this thesis we look at different ways to make use of hierarchical CCN content naming, in-network caching and other information-centric networking characteristics in a sensor environment. As a proof of concept we implemented a presentation bridge for a home automation system that provides services to the network through CCN.
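
    As a rough illustration of the naming idea (a minimal sketch, not the thesis's actual bridge implementation; the names, classes and forwarding table below are invented), hierarchical CCN names let a consumer ask for data by content name, while any node on the path may answer from its cache and forwarding works by longest-prefix match on name components:

        # Minimal sketch of CCN-style hierarchical naming with an in-network cache.
        # Real CCN exchanges Interest/Data packets; this only mimics the lookup logic.

        class ContentStore:
            """An in-network cache keyed by hierarchical content names."""
            def __init__(self):
                self.cache = {}

            def put(self, name, data):
                self.cache[name] = data

            def get(self, name):
                return self.cache.get(name)

        def longest_prefix_match(fib, name):
            """Find the forwarding entry with the longest matching name prefix."""
            components = name.strip("/").split("/")
            for i in range(len(components), 0, -1):
                prefix = "/" + "/".join(components[:i])
                if prefix in fib:
                    return fib[prefix]
            return None

        # A sensor publishes under a hierarchical name; a consumer requests by name.
        store = ContentStore()
        store.put("/home/livingroom/temperature/latest", b"21.4 C")

        fib = {"/home": "face0", "/home/livingroom": "face1"}
        print(longest_prefix_match(fib, "/home/livingroom/temperature/latest"))  # face1
        print(store.get("/home/livingroom/temperature/latest"))                  # cache hit
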
  • Tilli, Tuomo (2012)
    BitTorrent is one of the most used file sharing protocols on the Internet today. Its efficiency is based on the fact that when users download a part of a file, they simultaneously upload other parts of the file to other users. This allows users to efficiently distribute large files to each other without the need for a centralized server. The most popular torrent site is the Pirate Bay, with more than 5,700,000 registered users. The motivation for this research is to find information about the use of BitTorrent, especially on the Pirate Bay website, which will be helpful for system administrators and researchers. We collected data on all of the torrents uploaded to the Pirate Bay from 25 December 2010 to 28 October 2011. Using this data we found that a small percentage of users are responsible for a large portion of the uploaded torrents: there are over 81,000 distinct users, but the top nine publishers have published more than 16% of the torrents. We examined the publishing behaviour of the top publishers. The top usernames were publishing so much content that it became obvious that there are groups of people behind them. Most of the published content is video files, with a 52% share. We found that torrents are uploaded to the Pirate Bay website at a fast rate: about 92% of consecutive uploads have happened within 100 seconds of each other. However, the publishing activity varies a lot. These deviations may be caused by downtime of the Pirate Bay website, fluctuations in the publishing activity of the top publishers, national holidays or weekdays. One would think that the publishing activity of so many independent users would be quite level, but surprisingly this is not the case. About 85% of the files of the torrents are less than 1.5 GB in size. We also discovered that torrents of popular feature films were uploaded to the Pirate Bay very soon after their release, and the top publishers appear to compete over who releases a torrent first. The impact of the top publishers on the publishing of torrents thus appears to be quite significant.
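
    The inter-upload statistic above is the kind of quantity that falls out of a few lines of analysis; the sketch below (with invented timestamps standing in for the crawled Pirate Bay data) computes the share of consecutive uploads arriving within 100 seconds of each other:

        # Sketch: fraction of consecutive torrent uploads within 100 s of each other.
        # The timestamps are invented; the study used crawled Pirate Bay upload times.

        upload_times = [0, 40, 95, 300, 310, 315, 1200, 1220]  # seconds, sorted

        gaps = [b - a for a, b in zip(upload_times, upload_times[1:])]
        within_100s = sum(1 for g in gaps if g <= 100) / len(gaps)
        print(f"{within_100s:.0%} of consecutive uploads within 100 s")
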
  • Tuikka, Leevi (2023)
    On the hotter Paleoproterozoic Earth, the regime of plate tectonics was likely in transition, retaining features of the early Earth but with e.g. subduction already established. The different mode of tectonics also affected deformation at plate boundaries, for example in orogenesis. Due to the increased lithospheric temperatures, Paleoproterozoic orogens were hotter and lower, conditions that also made HP-LT metamorphism rare. In addition to the different temperature conditions, the past tectonics pose another challenge: the Paleoproterozoic rocks accessible today are largely remnants of ancient middle or lower crustal layers, exhumed by deep erosion, so a great amount of evidence of past orogens has been erased. To overcome this issue, geodynamic modeling is used to build a set of 19 continent-continent collision models, with temperature conditions ranging from the Paleoproterozoic to the Phanerozoic. Additionally, P-T-t paths are recorded in the models so that pressure-temperature conditions can be compared using pseudosection diagrams. Another significant quantity governing deformation at collisional continental margins is the obliquity angle of convergence: at roughly 60° obliquity, full strain partitioning should be triggered in an orogen, forming a strike-slip fault. In the models, both the temperature conditions and the obliquity angle are varied. The models were produced with DOUAR, a 3D thermo-mechanical geodynamic code coupled with the erosion model FastScape. The initial models with a 35 km thick crust were not able to produce proper strain partitioning due to low resolution, so another set of models with a 45 km thick crust was run, and in addition the lower crustal strength was varied. The thicker crust produced wider orogens, and hence more space for strain partitioning to develop. However, strain partitioning was not preserved particularly well over the full 40 Ma runtime of the models. Although the results were not ideal in terms of strain partitioning, the obliquity angle also affected the crustal shear zones in interesting ways.
  • Hemminki, Samuli (2012)
    In this thesis we present and evaluate a novel approach for energy-efficient and continuous transportation behavior monitoring on smartphones. Our work builds on a novel adaptive hierarchical sensor management scheme (HASMET), which decomposes the classification task into smaller subtasks. In comparison to previous work, our approach improves the task of transportation behavior monitoring in three respects. First, by employing only the minimal set of sensors necessary for each subtask, we significantly reduce the power consumption of the detection task. Second, using the hierarchical decomposition, we can tailor features and classifiers for each subtask, improving the accuracy and robustness of the detection task. Third, we extend the detectable motorised modalities to cover the most common public transportation vehicles. All of these attributes are highly desirable for real-time transportation behavior monitoring and serve as important steps toward implementing the first truly practical transportation behavior monitoring on mobile phones. In the course of the research, we developed an Android application for sensor data collection and used it to collect over 200 hours of transportation data, along with 2.5 hours of energy consumption data for the sensors. We apply our method to the data to demonstrate that, compared to the current state of the art, our method offers higher detection accuracy, provides more robust transportation behavior monitoring and achieves a significant reduction in power consumption. For evaluating the results with respect to the continuous nature of transportation behavior monitoring, we use the event- and frame-based metrics presented by Ward et al.
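
    The hierarchical idea can be sketched as a cascade in which each stage activates only the sensors its own subtask needs; the stages, sensor readings and thresholds below are invented placeholders to show the shape of such a scheme, not the actual HASMET design:

        # Sketch of a hierarchical sensor-management cascade (HASMET-style idea):
        # cheap sensors decide coarse states; expensive ones run only when needed.

        def read(sensor):
            # Placeholder for real sensor sampling on the phone.
            return {"accelerometer": 0.8, "gps_speed": 9.5}[sensor]

        def classify_transportation():
            accel_energy = read("accelerometer")   # low-power sensor first
            if accel_energy < 0.1:
                return "stationary"                # no further sensors needed
            speed = read("gps_speed")              # enable GPS only if moving
            if speed < 3.0:
                return "walking"
            # A full system would next run a motorised-mode classifier
            # (bus / tram / train / metro / car) on features tailored to that stage.
            return "motorised"

        print(classify_transportation())
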
  • Rensing, Fabian (2024)
    Accurately predicting a ship’s fuel consumption is essential for an efficient shipping operation. A prediction model has to be retrained regularly to minimize drift between its predictions and the actual consumption of the ship, since a ship’s performance is constantly changing due to weather influences and progressive hull fouling. Continuous Learning (CL) promises repeated retraining of an ML model while also mitigating catastrophic forgetting. Catastrophic forgetting happens when a model is trained on new data without proper measures to “remind” the model of its previous knowledge; in the context of ship performance prediction, this might be previously encountered weather or performance patterns in certain conditions. This thesis explores the adaptability of CL for setting up a production-ready training pipeline to regularly retrain a model that predicts a ship’s fuel consumption.
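
    One common way to realise such a pipeline is rehearsal: keep a small buffer of past voyages and mix it into every retraining batch, so the model is "reminded" of earlier conditions. The sketch below shows only this idea under stated assumptions; the features, model choice and buffer size are placeholders, not the thesis's pipeline:

        # Sketch of replay-based continual retraining for a consumption model.
        # Features, model choice and buffer size are illustrative assumptions.
        import random
        from sklearn.linear_model import SGDRegressor

        model = SGDRegressor()
        replay_buffer = []        # small sample of past (features, consumption) pairs
        BUFFER_SIZE = 1000

        def retrain(new_X, new_y):
            # Mix new voyage data with replayed old data to mitigate forgetting.
            replay = random.sample(replay_buffer, min(len(replay_buffer), len(new_X)))
            X = new_X + [x for x, _ in replay]
            y = new_y + [t for _, t in replay]
            model.partial_fit(X, y)
            # Reservoir-style update of the buffer with the new data.
            for pair in zip(new_X, new_y):
                if len(replay_buffer) < BUFFER_SIZE:
                    replay_buffer.append(pair)
                else:
                    replay_buffer[random.randrange(BUFFER_SIZE)] = pair

        # Hypothetical features, e.g. [speed over ground, significant wave height]:
        retrain([[10.0, 2.0], [12.0, 3.0]], [5.1, 6.0])
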
  • Prittinen, Taneli (2017)
    In this work, a SQUID-based setup was developed for NMR measurements on helium-3, and measurements were performed both with so-called continuous-wave NMR and with the pulsed-wave method. Because of the high price of helium-3 (about 5000 euros per litre), fluorine-containing Teflon and hydrogen-containing ice were also used as NMR test materials. The setup was designed and built at the O.V. Lounasmaa Laboratory of Aalto University, now known as the Low Temperature Laboratory. NMR, or nuclear magnetic resonance, is a phenomenon in which atomic nuclei with nuclear spin are placed in a static magnetic field and excited with external electromagnetic radiation, after which the excitation decays and releases an NMR signal. In this way many different properties of matter can be studied. A SQUID, or Superconducting Quantum Interference Device, is, as the name implies, a device based on quantum interference that can detect extremely small magnetic fields. In connection with NMR it is an efficient preamplifier with which very small signals can be detected. In this work its purpose is to improve the signal-to-noise ratio compared to conventional semiconductor preamplifiers and to provide a detector that can also measure at lower frequencies than the research group currently uses. Based on the measurements performed, the setup was able to detect an NMR signal with the continuous-wave method from every material studied. Pulsed measurements have not yet been carried out successfully, owing to the rather long, roughly 30-second relaxation time of helium, which made longer measurement series difficult to realize. Correspondingly, for the two solid materials, Teflon and ice, the resonance was so broad that absorbing energy into the sample with pulses would be difficult and would produce signals too small to detect easily, so these materials were studied only with the continuous-wave method in this work.
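
    The measurement principle rests on the Larmor relation, which ties the NMR signal frequency to the static field (γ is the nuclear gyromagnetic ratio; for helium-3, γ/2π is roughly 32 MHz/T):

        f_{L} = \frac{\gamma}{2\pi} B_{0}

    Low measurement fields thus mean low signal frequencies, which is where the SQUID's advantage over semiconductor preamplifiers is expected to show.
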
  • Wang, Sai (2015)
    Robustness testing is an important aspect of web service testing. It focuses on a service's ability to deal with invalid input; the test cases of robustness testing therefore aim at good coverage of input conditions. The behaviours of participating services are described in a BPEL contract, and services communicate with each other by sending SOAP messages. A BPEL process can be seen as a graph whose nodes and edges stand for activities and messages. Because of the characteristics of business processes, we extend the traditional definition of robustness to web services in SOA ecosystems. Robustness test case generation focuses on generating test paths, or message sequences, and on generating the test data carried in SOAP messages. A web service contract contains the information needed for test case generation. In this thesis, we divide contracts into three levels: document level, model level and implementation level. The model level contract provides the information for test case generation: the BPEL contract supports test path generation and the WSDL contract supports test data generation. By analysing the contents of the contract, test cases can be generated. Petri nets and a graph-based method are chosen for message sequence generation, and data perturbation techniques are used to generate invalid test data.
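
    Data perturbation of this kind can be illustrated with a small generator that mutates valid field values into boundary and type-invalid variants; the field names and perturbation rules below are invented for illustration, not taken from the thesis:

        # Sketch: perturbing valid SOAP message fields into invalid robustness inputs.
        # Field names and perturbation rules are illustrative assumptions.

        def perturb(value):
            """Yield invalid variants of a valid input value."""
            if isinstance(value, int):
                yield from (-1, 0, 2**31 - 1, 2**31)   # boundary values
                yield "not_a_number"                   # type violation
            elif isinstance(value, str):
                yield ""                               # empty string
                yield value * 10_000                   # oversized input
                yield None                             # missing value

        valid_request = {"orderId": 42, "customerName": "Alice"}
        for field, value in valid_request.items():
            for bad in perturb(value):
                test_case = dict(valid_request, **{field: bad})
                # Each test_case would be serialised into a SOAP message and sent.
                print(test_case)
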
  • Lipsanen, Mikko (2022)
    The thesis presents and evaluates a model for detecting changes in discourses in diachronic text corpora. Detecting and analyzing discourses that typically evolve over a period of time and differ in their manifestations in individual documents is a challenging task, and existing approaches like topic modeling are often not able to reach satisfactory results. One key problem is the difficulty of properly evaluating the results of discourse detection methods, due in large part to the lack of annotated text corpora. The thesis proposes a solution where synthetic datasets containing non-stable discourse patterns are generated from a corpus of news articles. Using the news categories as a proxy for discourses makes it possible both to control the complexity of the data and to evaluate the model results against the known discourse patterns. The complex task of extracting topics from texts is commonly performed using generative models, which are based on simplifying assumptions regarding the process of data generation. The model presented in the thesis instead explores the potential of deep neural networks, combined with contrastive learning, for discourse detection. The neural network model is first trained using a supervised contrastive loss function, which teaches the model to differentiate the input data based on the type of discourse pattern it belongs to. This pretrained model is then employed for both supervised and unsupervised downstream classification tasks, where the goal is to detect changes in the discourse patterns at the timepoint level. The main aim of the thesis is to find out whether contrastive pretraining can be used as part of a deep learning approach to discourse change detection, and whether the information encoded into the model during contrastive training can generalise to other, closely related domains. The results of the experiments show that contrastive pretraining can be used to encode information that directly relates to its learning goal into the end products of the model, although the learning process is still incomplete. However, the ability of the model to generalise this information in a way that could be useful in the timepoint-level classification tasks remains limited. More work is needed to improve the model performance, especially if it is to be used with complex real-world datasets.
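
    The pretraining objective referred to here is commonly associated with the supervised contrastive loss of Khosla et al. (2020): embeddings with the same label are pulled together, others pushed apart. A minimal NumPy version of that loss (batch layout and temperature are illustrative; the thesis's actual training code is not reproduced):

        # Minimal supervised contrastive loss in NumPy.
        # z: L2-normalised embeddings, shape (n, d); labels: shape (n,).
        import numpy as np

        def sup_con_loss(z, labels, tau=0.1):
            sim = z @ z.T / tau                              # pairwise similarities
            n = len(labels)
            mask_self = np.eye(n, dtype=bool)
            logits = np.where(mask_self, -np.inf, sim)       # exclude self-pairs
            # log-softmax over all other samples for each anchor
            log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
            pos = (labels[:, None] == labels[None, :]) & ~mask_self
            # average log-probability of positives per anchor, then mean over anchors
            per_anchor = np.where(pos, log_prob, 0.0).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
            return -per_anchor.mean()

        rng = np.random.default_rng(0)
        z = rng.normal(size=(8, 16))
        z /= np.linalg.norm(z, axis=1, keepdims=True)
        print(sup_con_loss(z, np.array([0, 0, 1, 1, 2, 2, 3, 3])))
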
  • Kangasaho, Vilma Eveliina (2018)
    The goal of this study is to ascertain whether methane (CH4) emissions can be estimated source-wise by utilising stable isotope observations in the CarbonTracker Data Assimilation System (CTDAS). The global CH4 budget is poorly known, and there are uncertainties in the spatial and temporal distributions as well as in the magnitudes of different sources. In this study the CTDAS-13CH4 atmospheric inverse model is developed. CTDAS-13CH4 is based on an ensemble Kalman filter (EnKF) and is used to estimate CH4 fluxes at regional and weekly resolution by assimilating CH4 and δ13C-CH4 observations. Anthropogenic biogenic emissions (rice cultivation, landfills and waste water treatment, and enteric fermentation and manure management) and anthropogenic non-biogenic emissions (coal, residential, and oil and gas) are optimised. Different emission sources can be identified by using process-specific isotopic signature values, δ13C-CH4, because different processes produce CH4 with different isotopic ratios. Optimisation of anthropogenic biogenic emissions increased the total emissions from the prior in eastern North America by 34%, while the optimisation of anthropogenic non-biogenic emissions increased them by only 14%; in western North America the corresponding changes were −39% and 9%, respectively. In western parts of Europe, total emissions increased from the prior by 18% in the anthropogenic biogenic optimisation and decreased by 3% in the non-biogenic one. Optimising anthropogenic biogenic and non-biogenic emissions within the total CH4 budget did not give complete emission estimates, because the optimisation did not include all emission sources and source-specific δ13C-CH4 values were assumed not to vary regionally. However, the modelled concentrations from the optimisation of anthropogenic non-biogenic emissions agreed better with the observed CH4 concentrations and δ13C-CH4 values; in that sense, the optimisation of anthropogenic non-biogenic emissions was more successful. This study provides reliable information on the magnitude of anthropogenic biogenic and non-biogenic emissions in regions with sufficient observational coverage. The next step in evaluating the spatial and temporal distributions and magnitudes of different CH4 sources will be to optimise all emission sources simultaneously in a multi-year simulation.
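
    The isotopic signature used throughout is the standard delta notation, the ¹³C/¹²C ratio of a sample expressed relative to the VPDB reference:

        \delta^{13}\mathrm{C} \;=\; \left( \frac{\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\text{sample}}}{\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\text{VPDB}}} - 1 \right) \times 1000\ \text{‰}

    Because biogenic (microbial) CH4 is markedly more depleted in ¹³C than fossil CH4, these per-mille signatures are what allow the inversion to separate the source categories.
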
  • Nummelin, Aleksi (University of Helsinki, 2012)
    The meridional overturning circulation (MOC) is a crucial component of Earth's climate system, redistributing heat around the globe. The abyssal limb of the MOC is fed by deep water formation near the poles, and a basic requirement for any successful climate model simulation is the ability to reproduce this circulation correctly. The deep water formation itself, convection, occurs on smaller scales than the climate model grid size, so the convection process needs to be parameterized. It is, however, somewhat unclear how well parameterizations developed for turbulence can reproduce deep convection and the associated water mass transformations. Convection in the Greenland Sea was studied with the 1-D turbulence model GOTM and with data from three Argo floats. The model was run over the winter 2010-2011 with ERA-Interim and NCEP/NCAR atmospheric forcings and with three different mixing parameterizations: k-ε, k-kL (Mellor-Yamada) and KPP. Furthermore, the effects of mesoscale spatial variations in the atmospheric forcing data were tested by running the model with forcings taken along the floats' paths (Lagrangian approach) and from the floats' median locations (Eulerian approach). Convection was found to happen by gradual mixed layer deepening. It caused a salinity decrease in the Recirculating Atlantic Water (RAW) layer just below the surface, while in the deeper layers a salinity and density increase was clearly visible. A slight temperature decrease was observed in the whole water column above the convection depth. Atmospheric forcing had the strongest effect on the model results. ERA-Interim forcing produced model output closer to the observations, but convection began too early with both forcings and both eventually produced too low temperatures. The salinity increase at mid-depths was controlled mainly by the RAW layer, but the atmospheric freshwater flux was also found to affect the end result. Furthermore, the NCEP/NCAR freshwater flux was found to be negative and large enough to become a clear secondary driving factor for the convection. The results show that the mixing parameterization mainly alters the timing of convection: the KPP parameterization produced clearly too rapid convection, while the k-ε parameterization produced output closest to the observations. The results from the Lagrangian and Eulerian approaches were ambiguous in the sense that neither was systematically closer to the observations, which could be explained by errors in the reanalyses arising from their grid size. More conclusive results could be produced with the aid of finer-scale atmospheric data. The results, however, clearly indicate that atmospheric variability on scales of 100 km produces quantifiable differences in the results.
  • Rimo, Eetu (2023)
    In this thesis I have examined wind gust cases in Finland that occurred during the summer seasons between 2010 and 2021. The main goal of the thesis was to find convective wind gust cases of non-tornadic origin, also known as damaging straight-line winds, and to find out whether the gust at the surface could, in theory, have been caused solely by the slow advection of strong upper-level winds to the surface, or whether another factor, such as a strong downdraft, must have played a role in the creation of the gust. Convective wind gusts occur in Finland every summer, but despite this, research on them and on the damage they can cause has been relatively sparse compared to, for example, gusts caused by extratropical cyclones. To find suitable wind gust cases, weather data from the Finnish Meteorological Institute (FMI) was downloaded. After scanning through the data for cases suspected of being of convective origin, ERA5 reanalysis data produced by the European Centre for Medium-Range Weather Forecasts (ECMWF) was downloaded for the locations and times of the gusts’ occurrence. Wind gust cases suspected of being caused by extratropical cyclones were also chosen for further examination, for comparison purposes. The FMI wind gust speed and wind speed data was visualized in line charts, while the ERA5 values of wind speed, equivalent potential temperature and relative humidity were tabulated and visualized in vertical cross-sections. The visualization was done with the help of Python’s matplotlib.pyplot library and the MetPy toolbox. The results indicated that the differences between gust cases caused by convection and gust cases caused by extratropical cyclones can be clearly seen in the reanalysis data. As for the convective cases themselves, the data indicated that in several of them the gust could, in theory at least, have been caused by the slow advection of strong upper-level winds to the surface on its own. In the majority of the cases, however, the data indicated that the gust was likely the result of a strong downdraft, or possibly of a combination of a downdraft and advection. Besides this, the values of the examined parameters and their visualization revealed that damaging straight-line winds can occur under various conditions in Finland.
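
    A vertical cross-section of the kind described can be drawn with a few matplotlib calls; the sketch below uses synthetic wind-speed data in place of the ERA5 fields, so the shapes and values are illustrative only:

        # Sketch: vertical cross-section of wind speed (pressure level vs. time),
        # in the spirit of the thesis figures. Data here are synthetic, not ERA5.
        import numpy as np
        import matplotlib.pyplot as plt

        times = np.arange(0, 24)                       # hours
        levels = np.array([1000, 925, 850, 700, 500])  # hPa
        wind = 5 + 20 * np.random.rand(len(levels), len(times))

        fig, ax = plt.subplots()
        cs = ax.contourf(times, levels, wind, cmap="viridis")
        ax.invert_yaxis()                              # pressure decreases upward
        ax.set_xlabel("Time (h)")
        ax.set_ylabel("Pressure (hPa)")
        fig.colorbar(cs, label="Wind speed (m/s)")
        plt.show()
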
  • Ruiz, Paloma (2020)
    High-quality thin films deposited by atomic layer deposition (ALD) are key components in numerous modern technological applications. The technique is extensively used in the semiconductor and photovoltaic industries, for example. ALD is an excellent technique for thin film deposition due to its characteristic self-limiting surface reactions, which allow a reproducible, conformal, and precisely controlled coating on a substrate. Numerous new ALD materials are developed each year to advance technological innovations to new levels. However, occasionally a desired material cannot be produced directly by ALD. To still obtain the impressive features of ALD, an ALD thin film can be transformed chemically or physically in a manner that preserves the film-like structure of the original layer. This thesis explores these types of ALD film transformations, or conversions, by attempting the conversions of atomic layer deposited Al2O3 from a two-dimensional film to a three-dimensional film structure, Ru to RuO2, Re to ReO3, ZnO to ZIF-8, and ZrOx to UiO-66 films. Current preparation methods, common applications, and general properties of these five materials are explored in the literature review. This provides insight into some of the key features of the fabrication of these materials and into the value the thin film structure brings them; it also highlights the challenges encountered in these processes. If a material has been obtained through conversion of an ALD thin film, the process is reviewed in detail. Additionally, the literature review explains the basics of conversion reactions as well as the fundamentals of ALD. The experimental section focuses on studying and optimizing the distinct challenges and explores new methods for fabricating the materials through the conversion of ALD thin films. Conversion of zirconium oxide thin films to UiO-66 under terephthalic acid vapor was attempted with modest results. Ru thin films were successfully converted into crystalline RuO2 under ambient and O2 atmospheres, and the processes seem promising for further research. Re and ReNx films were partially converted to ReO3 under O2, O3 and humid environments; the continuity of these films proved to be problematic. Factors affecting the formation of ZIF-8 and Al2O3 "grass" in the conversions of ZnO under 2-methylimidazole vapor and of Al2O3 under heated water, respectively, were assessed, and the optimization of these processes was studied.
  • Peltonen, Jussi (2019)
    FINIX is a nuclear fission reactor fuel behaviour module developed at VTT Technical Research Centre of Finland since 2012. It has been simplified in comparison to full-fledged fuel performance codes to improve its usability in coupled applications by reducing the amount of required input information. While it has been designed to be coupled on the source-code level with other reactor core physics solvers, it can provide accurate results as a stand-alone solver as well. The corrosion that occurs at the interface between the nuclear fuel rod cladding and the reactor coolant is a limiting factor for the lifespan of a fuel rod. Of the several corrosion phenomena, oxidation of the cladding has been studied widely. It is modelled in other fuel performance codes using semi-empirical models based on several decades of experimental data. This work aims to implement cladding oxidation models in FINIX and validate them against reference data from experiments and from the state-of-the-art fuel performance code FRAPCON-4.0. In addition, the models of cladding-coolant heat transfer and coolant conditions are updated alongside, to improve the accuracy of the oxidation predictions in stand-alone simulations. The theory of cladding oxidation, the water coolant models, and the general structure of FINIX and reactor analysis are studied and discussed. The results of the initially implemented cladding oxidation models contain large errors, which indicates that FINIX does not account for the axial temperature difference between the bottom and the top of the rod in the coolant. This was corrected with updates to the coolant models, which calculate various properties of the water coolant based on the International Association for the Properties of Water and Steam (IAPWS) industrial water correlations in order to solve the axial temperature increase in the bulk coolant. After these updates the predictions of cladding oxidation improved, and the validity of the different oxidation models was further analyzed in the context of FINIX.
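
    Semi-empirical cladding oxidation models of the kind discussed here typically integrate an Arrhenius-type rate law for oxide thickness; the sketch below shows the general shape of such a model with invented constants, not the actual FINIX or FRAPCON correlations:

        # Sketch of a semi-empirical oxide growth law (cubic pre-transition kinetics):
        #   d(delta^3)/dt = A * exp(-Q / (R * T))
        # Constants A and Q are illustrative placeholders, not FRAPCON/FINIX values.
        import math

        A = 6.3e9      # um^3/day, hypothetical pre-exponential factor
        Q = 135.0e3    # J/mol, hypothetical activation energy
        R = 8.314      # J/(mol K)

        def oxide_thickness(T_kelvin, days, delta0=0.0):
            """Oxide thickness (um) after `days` at metal-oxide interface temperature T."""
            rate = A * math.exp(-Q / (R * T_kelvin))   # d(delta^3)/dt
            return (delta0**3 + rate * days) ** (1.0 / 3.0)

        print(oxide_thickness(600.0, 365))  # one year at 600 K

    A full model would additionally switch to faster post-transition kinetics once the oxide passes a threshold thickness, and take T from the axially resolved coolant solution described above.
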
  • Knuutinen, Janne (2017)
    Copulas have become a common tool in finance. The aim of this work is to present the theory of copulas and its application to modelling financial risks. Copulas are defined and the central theory related to them is reviewed. The most important correlation concepts are introduced, among them the measures Kendall's tau and Spearman's rho. In addition, the copula families to which the most commonly used copulas belong are defined. Copula parameters can be estimated with different methods: the three most important methods based on the log-likelihood function are reviewed, as is the Monte Carlo method, with which variables can be simulated from copulas. Tail dependence, a useful concept when modelling extreme phenomena, is introduced. Value at Risk, or VaR, is one of the most important risk measures for investment risk. With a method based on the rearrangement algorithm, the worst and best values of VaR can be computed. The operation of the method is illustrated by rearranging a matrix with the algorithm so that an upper bound for the worst VaR is obtained. The method is then applied to the rearrangement of a matrix composed of several years of daily losses of three stocks: Nokia, Samsung and Danske Bank. The upper bound for the worst VaR obtained in this way is larger than the historical VaR value, which was computed with another method. The practical application of the theory is continued by computing correlations between the losses of the stocks. The correlation coefficient between the losses of Nokia and Deutsche Bank is found to be the largest, and it is concluded that their dependence structure is best described by a t-copula.
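
    The rearrangement algorithm works on a matrix whose columns hold tail-quantile losses of the individual assets: each column in turn is reordered to be oppositely ordered to the sum of the other columns, which drives the row sums toward the worst-case configuration. A compact sketch (the data and iteration count are illustrative, not the thesis's stock-loss matrix):

        # Sketch of the rearrangement algorithm (RA) for the worst-VaR bound.
        # X: rows are tail scenarios, columns are marginal loss quantiles.
        import numpy as np

        def rearrange(X, iters=50):
            X = X.copy()
            for _ in range(iters):
                for j in range(X.shape[1]):
                    others = X.sum(axis=1) - X[:, j]
                    # Oppositely order column j against the sum of the others:
                    # largest values of column j go where `others` is smallest.
                    X[np.argsort(others), j] = np.sort(X[:, j])[::-1]
            return X

        rng = np.random.default_rng(1)
        X = rng.exponential(size=(100, 3))    # hypothetical tail losses of 3 assets
        Xr = rearrange(X)
        print("worst-VaR bound estimate:", Xr.sum(axis=1).min())
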
  • Yi, Xinxin (2015)
    Problem: The Helsinki Psychotherapy Study (HPS) is a quasi-experimental clinical trial designed to compare the effects of different treatments (i.e. psychotherapy and psychoanalysis) on patients with mood and anxiety disorders. During its 5-year follow-up from 2000 to 2005, repeated measurements were carried out at 0, 12, 24, 36, 48 and 60 months. However, some individuals did not show up at certain data collection points or dropped out of the study entirely, leading to missing values. This prevents the application of further statistical methods and violates the intention-to-treat (ITT) principle in longitudinal clinical trials (LCT). Method: Multiple Imputation (MI) has many claimed advantages in handling missing values. This research compares different MI methods, i.e. Markov chain Monte Carlo (MCMC), Bayesian Linear Regression (BLR), Predictive Mean Matching (PMM), Regression Tree (RT) and Random Forest (RF), in their treatment of the HPS missing data. The statistical software used is the SAS PROC MI procedure (version 9.3) and the R MICE package (version 2.9). Results: MI performs better than ad hoc methods such as listwise deletion in detecting potential relationships and reducing potential biases in parameter estimation when the missing completely at random (MCAR) assumption is not satisfied. PMM, RT and RF perform better than BLR and MCMC at generating imputed values inside the range of the observed data. The machine learning methods, i.e. RT and RF, are preferable to regression methods such as PMM and BLR, since the imputed data have distribution curves and other features (e.g. median, interquartile range, skewness of distribution) quite similar to those of the observed data. Implications: It is advisable to use MI methods in place of ad hoc methods in the treatment of missing data, if the additional effort and time are not a problem. The machine learning methods such as RT and RF are preferable to relatively arbitrary user-specified regression methods such as PMM and BLR according to our data, but further research is required to confirm this indication. R is more flexible than SAS in that RT and RF can be applied.
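
    Outside SAS and R, the same chained-equations idea is available in, for example, scikit-learn; the sketch below imputes with a random-forest estimator in the spirit of the RT/RF methods compared above (the data matrix is invented, not HPS data):

        # Sketch: chained-equations imputation with a random-forest estimator,
        # analogous in spirit to the MICE `rf` method. Data are invented.
        import numpy as np
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer
        from sklearn.ensemble import RandomForestRegressor

        X = np.array([[7.0, 2.0, 3.0],
                      [4.0, np.nan, 6.0],
                      [10.0, 5.0, np.nan],
                      [8.0, 8.0, 9.0]])

        imputer = IterativeImputer(
            estimator=RandomForestRegressor(n_estimators=50, random_state=0),
            max_iter=10, random_state=0)
        print(imputer.fit_transform(X))
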
  • Kumar, Ajay Anand (2012)
    Due to next-generation sequencing technologies, the amount of public sequence data is growing exponentially, but the rate of sequence annotation is lagging behind. There is a need for robust computational tools for the correct assignment of annotations to protein sequences. Sequence homology based inference of molecular function and the subsequent transfer of annotation is the traditional way of annotating genome sequences. A TF-IDF based methodology that mines the informative descriptions of high-quality annotated sequences can be used to cluster functionally similar and dissimilar protein sequences. The aim of this thesis is to perform a correlation analysis of the TF-IDF methodology against standard Gene Ontology (GO) semantic similarity measures. We have developed and implemented a high-throughput tool named GOParGenPy for effective and faster Gene Ontology related analysis. It incorporates any Gene Ontology linked annotation file and generates the corresponding data matrices, providing a useful interface for any downstream Gene Ontology analysis across various mathematical platforms. Finally, the correlation evaluation between TF-IDF and standard Gene Ontology semantic similarity methods validates the effectiveness of the TF-IDF methodology for clustering functionally similar protein sequences.
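
    The TF-IDF step can be illustrated with scikit-learn: annotation descriptions are vectorised and compared with cosine similarity, after which similar sequences can be clustered. The descriptions below are invented examples, not real annotation records:

        # Sketch: TF-IDF over annotation descriptions + cosine similarity.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        descriptions = [
            "ATP binding protein kinase activity",
            "serine threonine protein kinase",
            "DNA binding transcription factor",
        ]
        tfidf = TfidfVectorizer().fit_transform(descriptions)
        print(cosine_similarity(tfidf))   # the two kinase entries score most similar
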
  • Tauriainen, Juha (2023)
    Software testing is an important part of ensuring software quality. Studies have shown that having more tests results in a lower count of defects. Code coverage is a tool used in software testing to find parts of the software that require further testing and to learn which parts have been tested. Code coverage is generated automatically by the test suites during test execution. Many types of code coverage metrics exist, the most common being line coverage, statement coverage, function coverage and branch coverage. These four common metrics are usually enough, but there are many specific coverage types for specific purposes, such as condition coverage, which tells how many boolean conditions have been evaluated as both true and false. Each metric gives hints on how the codebase is tested. A common view among practitioners is that code coverage does not correlate much with software quality. The correlation of software quality with code coverage has historically been a broadly researched topic, with importance both in academia and in professional practice. This thesis investigates whether code coverage correlates with software quality by performing a literature review. The literature review yields a surprising result: most of the studies included in this thesis point towards code coverage correlating with software quality. This positive correlation emerges from 22 studies conducted between 1995 and 2021, comprising both academic and industrial studies. The studies were categorised both by key finding (correlation or no correlation) and by study type (survey studies, case studies, open-source studies), and in each category most studies point towards a correlation. This finding contradicts the prevailing opinion among professional practitioners.
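
    The difference between the metrics is easy to see on a small example: the single test below executes every line of the (hypothetical) function, yet exercises only one of the two outcomes of its `if`, so line coverage is 100% while branch coverage is 50%:

        # Why line and branch coverage differ: one test runs every line of
        # `safe_div` but covers only the true outcome of the `if` branch.
        def safe_div(a, b):
            result = 0.0
            if b != 0:
                result = a / b
            return result

        assert safe_div(6, 3) == 2.0   # 100% line coverage, 50% branch coverage
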