
Browsing by Title

  • Joro, Sauli (University of Helsinki, 2004)
  • Tanskanen, Ville (2020)
    Microbial volatile organic compounds are emitted by a diverse set of microbial organisms and are known to cause health hazards when present in indoor air. Early detection of fungus-contaminated buildings and of the species present is crucial for preventing health problems caused by fungal secondary metabolites. This thesis focuses on analysing the emission profiles of different insulation materials and fungal cultures, which will allow efficient new ways of detecting fungi in contaminated buildings to be developed in further studies. The studied insulation materials were cellulose and glass wool, which were analysed under several different conditions. The humidity of the atmosphere was varied between 0 and 10 microliters, and the temperature was varied between 30 °C and 40 °C. In the fungal emission profile study, 24 different cultures were analysed in two different atmospheres, ambient and microaerophilic, and with multiple different inocula. The analyses of both the insulation materials and the fungal cultures were performed using a headspace solid-phase microextraction Arrow tool and a headspace in-tube extraction tool together with gas chromatography–mass spectrometry. One goal of this thesis was also to test the suitability of these methods for the detection of fungal secondary metabolites. Comprehensive fungal emission profiles were successfully compiled, and new information was obtained on the behaviour of the insulation materials in different settings. In addition, new information about the analysis methods and about fungal behaviour in different atmospheres was found. Headspace solid-phase microextraction Arrow with gas chromatography–mass spectrometry was found to be an efficient, sensitive and time-saving method for indoor air studies. Many potential culture-specific biomarker compounds were also found for further study.
  • Melnik, Elena (2024)
    Mercury is a toxic heavy metal that poses significant risks to human health. In many industrial and occupational settings, employees are at high risk of mercury exposure due to the nature of their work. Consequently, biomonitoring and routine testing of mercury levels in the working environment are crucial to ensure occupational health and prevent adverse health effects. This master’s thesis reviews the literature on occupational exposure to mercury and its impact on human well-being. The review focuses on the pathways through which mercury enters the body and the transformations it undergoes. Protective strategies and adopted regulations are also investigated. The analytical methods used for the detection of mercury in biological samples, such as cold vapor atomic absorption spectroscopy (CV-AAS) and inductively coupled plasma mass spectrometry (ICP-MS), are explored, including a comparison of their efficacy. The primary objective of the experimental part of this research was to validate the use of a flow injection mercury system (FIMS) as a method for determining mercury levels in human blood and urine samples. Additionally, ICP-MS was employed for a comparative analysis of mercury levels in urine samples. The analytical parameters of FIMS and the potential for selective analysis with two reducing agents, stannous chloride (SnCl2) and sodium tetrahydroborate (NaBH4), were evaluated. This process included calibration, analysis of control materials, optimization of reductant concentration, and calculation of limits of detection (LODs) and quantification (LOQs). Various approaches for the preparation of blood samples were tested. Issues associated with the incompatibility of a particular FIMS setup with the intended goals were identified, and possible solutions were proposed. The study demonstrates practical value, as it clarifies the prospects for using FIMS in the biomonitoring of mercury.
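    The calibration and the calculation of LODs and LOQs mentioned above can be sketched with one common convention, taking the LOD as three times the standard deviation of blank replicates divided by the calibration slope and the LOQ as ten times that ratio. The thesis does not state which criterion was used, so the convention and the numbers below are assumptions.

        import numpy as np

        # Hypothetical calibration data: standard concentrations (ug/L) vs. instrument signal.
        conc   = np.array([0.0, 0.5, 1.0, 2.0, 5.0])
        signal = np.array([0.002, 0.051, 0.100, 0.198, 0.495])
        blank  = np.array([0.002, 0.003, 0.001, 0.002, 0.004])   # replicate blank readings

        slope, intercept = np.polyfit(conc, signal, 1)            # linear calibration curve
        s_blank = blank.std(ddof=1)                               # standard deviation of the blank
        lod = 3 * s_blank / slope                                 # one common LOD convention
        loq = 10 * s_blank / slope                                # corresponding LOQ
        print(f"LOD ~ {lod:.3f} ug/L, LOQ ~ {loq:.3f} ug/L")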
  • Helin, Aku (2018)
    Short-chain aliphatic amines (SCAA) are present in multiple different matrices in the environment at low concentration levels. SCAA are considered environmentally relevant compounds due to their role as precursors in the formation of carcinogenic N-nitrosamines in various matrices and in new particle formation in the atmosphere. SCAA are characteristically highly volatile, polar, reactive and basic compounds. Consequently, the quantitative determination of SCAA tends to be rather challenging. In the literature part of this thesis, different analytical methods used for the determination of SCAA in environmental samples are reviewed. The typical approach for the analysis of SCAA has been the use of derivatization techniques. Derivatization converts SCAA into a less polar and less volatile form, which enables the use of conventional separation techniques, such as gas chromatography (GC) and high-performance liquid chromatography (HPLC). However, methods involving derivatization can be quite time-consuming, require the use of excess reagents and are mainly applicable to the analysis of primary and secondary SCAA. To reduce reagent and solvent consumption, microextraction techniques have been implemented as part of the derivatization methods. For the analysis of free SCAA, mainly ion chromatography (IC) and GC have been used. In recent years, novel online mass spectrometry techniques have also been used for the determination of free SCAA in atmospheric air. In the experimental part of this thesis, a novel solid-phase microextraction (SPME) device called SPME Arrow was used for the extraction of free SCAA. Different SPME Arrow sorbent materials were tested, including commercial and custom sorbents, the extraction conditions were optimized, and the performance of SPME Arrow was compared to a conventional SPME fiber. The developed method was applied to the determination of SCAA in wastewater samples and atmospheric air samples. In general, the performance of the custom-sorbent-coated SPME Arrow was not adequate due to deterioration of the coating, although preliminary results indicated possible selectivity towards dimethylamine. Among the commercial-sorbent-coated SPME devices, the SPME Arrow was better than the SPME fiber in terms of limit of quantification and performance in real sample analysis. When the SPME Arrow was used for wastewater sample analysis, no matrix interferences were observed, in contrast to the results obtained with the SPME fiber. In addition, the SPME Arrow could be used for the determination of SCAA in atmospheric air samples after preconcentration with a denuder during sampling.
  • Tsai, Chen-Yeh (2018)
    Sugars and sugar alcohols are indicative compounds in environmental aerosols, which makes them very important. Their concentrations reveal biogenic and anthropogenic information related to climate, air quality, wood consumption, plantation activity and pollution. The conventional analysis methods for sugars and sugar alcohols are reversed-phase high-performance liquid chromatography–tandem mass spectrometry (HPLC-MS/MS) and gas chromatography–tandem mass spectrometry (GC-MS/MS). However, both have limitations, because sugar and sugar alcohol aerosol samples are not easy to analyze. In reversed-phase HPLC-MS/MS, the separation of the analytes is not satisfactory. In GC-MS/MS, the derivatization step requires extra work and the derivatized compounds are not stable. In addition, the matrix effect from the aerosol sample is a significant challenge that needs to be solved. Hence, hydrophilic interaction chromatography (HILIC) and solid-phase extraction (SPE) were introduced. Retention on a HILIC column is governed by hydrophilic partitioning, hydrogen bonding and electrostatic interactions. A polar stationary phase is used in HILIC mode and a highly organic mobile phase is employed, so a stagnant water-rich layer forms on the stationary phase, which separates sugars and sugar alcohols efficiently. Furthermore, the interferences and the matrix effect are addressed by SPE. The development and optimization of an SPE-HILIC-MS/MS method for sugars were carried out in the experimental part. Finally, real environmental aerosol samples were analyzed with the optimized parameters and methods, and the sugars and sugar alcohols were determined successfully from atmospheric aerosol samples.
  • Lehtonen, Markus (2019)
    Humans utilise many kinds of chemicals; some are safe to use and some are dangerous. Some chemicals fall into a grey area in terms of safety, and surfactants are among them. They are used abundantly and they find their way into the environment. It is an established fact that surfactants can hinder the normal functions of cells and, in the worst cases, cause cell death. Despite this, it is not completely understood what harm surfactants can do to living organisms in the environment. We live and work in houses that are cleaned with washing chemicals and surfactants. Recently, surfactants have been hypothesised to be present in indoor air, and new studies support this hypothesis. The literature suggests that surfactants may adsorb onto aerosols. However, analysis methods that can be applied directly to the determination of surfactants in aerosol condensate samples are not available. In this M.Sc. thesis, a new surfactant determination method was developed using capillary electrophoresis with UV detection and tetraborate complex formation. First, surfactant determination methods for environmental samples found in the literature were reviewed and described, and their suitability for the experimental studies was evaluated. Among many options, capillary electrophoresis coupled with ultraviolet detection was selected. The method was developed for the determination of didecyldimethylammonium chloride (DDAC) and polyethylene glycol monoalkyl ether (Genapol X-80), which are representatives of cationic and nonionic surfactants, respectively, and represent the surfactants in cleaning chemicals. In the experimental work, method development focused on the composition of the electrolyte solutions, since these played an important role in the separation and sensitivity of the analytes. Tricine was first selected as the electrolyte because it provided the best responses in the preliminary tests. However, in later studies it proved unsuitable for the determination of cationic and nonionic surfactants. Therefore, in accordance with the published literature, a tetraborate electrolyte was chosen. As application studies, we demonstrated that the studied surfactants are present in water vapour by analysing DDAC and Genapol X-80 separately in water condensates collected in laboratory-scale pilot tests. The developed method was also applied to authentic samples of indoor water condensates and washing solutions collected from two elementary schools with air quality issues. Surfactants were detected in these samples as well.
  • Lumme, Erkka (2016)
    The magnetic field has a central role in many dynamical phenomena in the solar corona, and the accurate determination of the coronal magnetic field holds the key to solving a whole range of open research problems in solar physics. In particular, realistic estimates of the magnetic structure of Coronal Mass Ejections (CMEs) enable better understanding of the initiation mechanisms of these eruptions as well as more accurate forecasts of their space weather effects. Due to the lack of direct measurements of the coronal magnetic field, the best way to study the field evolution is to use data-driven modelling, in which routinely available photospheric remote sensing measurements are used as a boundary condition. The magnetofrictional method (MFM) stands out from the variety of existing modelling approaches as a particularly promising method: the approach is computationally inexpensive but still has sufficient physical accuracy. The data-based input to the MFM is the photospheric electric field, which serves as the photospheric boundary condition. The determination of the photospheric electric field is a challenging inversion problem, in which the electric field is deduced from the available photospheric magnetic field and plasma velocity measurements. This thesis presents and discusses the state-of-the-art electric field inversion methods and the properties of the currently available photospheric measurements. The central outcome of the thesis project is the development and testing of a novel ELECTRICIT software toolkit that processes the photospheric magnetic field data and uses it to invert the photospheric electric field. The main motivation for the toolkit is coronal modelling using the MFM, but the processed magnetic field and electric field data products of the toolkit are usable also in other applications such as force-free extrapolations or high-resolution studies of photospheric evolution. This thesis presents the current state of the ELECTRICIT toolkit as well as the optimization and first tests of its functionality. The tests show that the toolkit can already in its current state produce photospheric electric field estimates to a reasonable accuracy, despite the fact that some of the state-of-the-art electric field inversion methods are yet to be implemented in the toolkit. Moreover, the optimal values of the free parameters in the currently implemented inversion methods are shown to be physically justifiable. The electric field inversions of the toolkit are also used to study other questions. It is shown that the large noise levels of the vector magnetograms in the quiet Sun cause the inverted electric field to be noise-dominated, and thus the magnetic field data from this region should not be considered in the inversion. Another aspect that is studied is electric field inversion based only on line-of-sight (LOS) magnetograms, which is an attractive option due to the much shorter cadence and better availability of the LOS data. The tests show that inversions based on the LOS data have large errors when compared to the vector-data-based inversions. However, the results are shown to have reasonable consistency in the horizontal components of the electric field when the region of interest is near the centre of the solar disk.
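    The inversion problem described above is constrained by Faraday's law: magnetograms fix only the inductive part of the photospheric electric field, while any curl-free contribution must be specified with additional assumptions, for example through ideal Ohm's law using the measured plasma velocities. As a generic reminder (the generic physics, not necessarily the exact formulation implemented in ELECTRICIT):

        \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t},
        \qquad
        \mathbf{E} = \mathbf{E}_{\mathrm{ind}} - \nabla \psi,
        \qquad
        \mathbf{E} \approx -\mathbf{v} \times \mathbf{B} \;\; \text{(ideal Ohm's law)},

    where the inductive component E_ind reproduces the observed time derivative of B, and the scalar potential psi is the part that magnetic field measurements alone cannot determine.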
  • Ruotsalainen, Sini (2022)
    The literature review of this thesis presents the most utilized sample preparation and analysis methods for the determination of trace elements in refinery feedstocks and end products during the last decade. The advantages and disadvantages of the methods used, as well as current trends, are presented. The challenges associated especially with silicon determination are discussed, and possible solutions provided in publications are highlighted. The experimental part of this thesis was conducted in Neste's Research and Development unit in Porvoo. It includes method development, a study of the behavior of siloxane compounds, and method validation for various sample matrices. Method development consisted of introducing a peristaltic pump into the inductively coupled plasma mass spectrometer (ICP-MS) sample introduction system for two different methods (ASTM D8110M, NM 534), replacing the previously used free aspiration approach. The behavior of volatile siloxane compounds in different sample matrices, including liquefied waste plastics (LWP), was studied, and these compounds were determined with ICP-MS. The studied siloxanes posed great challenges with the chosen methods due to their high volatility. Validation of the methods (ASTM D8110M, NM 534) for different sample matrices was also carried out with ICP-MS. The validated matrices included several renewable matrices, such as liquefied waste plastics, fatty acids and other liquefied waste samples, as well as heavy fossil distillates. Intra- and inter-day repeatabilities of the silicon concentration in the samples as such and as spiked, together with spiked recoveries, played an important role in the method validation.
  • Douglas, Regan (2024)
    Mesowear, a method that scores the wear of teeth to determine the amount of abrasive material in the diet, has long been used to understand palaeoecology through the diet of herbivores. Until recently, proboscidean teeth could not be used for these studies. The mesowear angle analysis introduced by Saarinen et al. in 2015 has made this possible by measuring the relative angle between the enamel ridges and dentine valleys of the lophs of proboscidean teeth to account for wear. This study compares the average mesowear angles of 428 specimens of Pleistocene Mammuthus to determine geospatial variation across the genus as well as within the species M. primigenius. The results are then corroborated with previous studies of other palaeoecological proxies to ensure that they truly reflect a means of determining palaeoecology through proboscidean mesowear. Overall, this study finds significant geospatial variation and little interspecific variation within Mammuthus, indicating that mammoths were highly adaptable herbivores capable of surviving in a wide array of the harshest habitats, and that browsing or grazing habits were determined not by species morphology but by the environments they inhabited.
  • Tu, Jingyi (2023)
    Atmospheric aerosol particles play a significant role in urban air pollution, and understanding their size distribution is essential for assessing pollution sources and urban aerosol dynamics. In this study, we use a novel method developed by Kontkanen et al. (2020) to determine size-resolved particle number emissions in the particle size range of 3-800 nm at an urban background site and a street canyon site in Helsinki. Our results show overall higher particle number emissions in the street canyon compared to the urban background. On non-NPF event days, the particle number emissions of 3-6 nm particles in the urban background are highest at noon. The emissions in the size range of 6-30 nm are highest during the morning or afternoon at both sites, indicating that traffic is the main particle source in this size range. The emissions of larger particles are relatively low. Seasonal analysis suggests higher emissions during the summer in comparison to the winter, which might be linked to the higher product of mixing layer height (MLH) and particle number concentration in summer. Further investigations into particle emissions from different wind sectors suggest higher particle emissions from the urban sector than from the road sector in the urban background, contrary to the results for NOx concentrations. More research is needed to better understand the underlying factors. In addition, a comparison between particle number emissions estimated using MLH data measured by the FMI and ERA5 model MLH data reveals that the FMI data provide a more reliable representation of the MLH in the study area. Overall, the methods show limitations in accurately capturing particle dynamics in Helsinki. Future studies should address these limitations by employing more accurate NPF event classification and refining the sector division methods.
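    The link noted above between emissions and the product of particle number concentration and mixing layer height follows from treating the mixed layer as a well-mixed box whose surface emissions balance the change in the column burden plus losses. A generic box-model relation of this kind (an illustration under that assumption, not necessarily the exact formulation of Kontkanen et al. 2020) is

        E_i \;\approx\; H_{\mathrm{mix}} \left( \frac{\mathrm{d}N_i}{\mathrm{d}t} + L_i \right),

    where E_i is the surface number emission in size bin i, N_i the measured number concentration, H_mix the mixing layer height, and L_i lumps the loss terms such as coagulation, deposition and growth out of the bin.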
  • Halkoaho, Johannes (2022)
    Magnetic resonance spectroscopy (MRS) is an imaging technique that can be used to gain information about the metabolite concentrations within a certain volume of interest, for example in brain imaging. The brain consists of three main types of tissue: cerebrospinal fluid, white matter and gray matter. It is important to know the volume fractions of these tissues, as the resolution of MRS is significantly lower than that of magnetic resonance imaging (MRI). The tissues all have different metabolite profiles, and in order to obtain meaningful data the volume fractions need to be taken into account. This information can be obtained by segmenting an image acquired with MRI. In this work a software tool was created to find these volume fractions, taking as input a .rda file created by the scanner and a NIfTI file. The NIfTI file is the image formed using MRI, and the .rda file is the manufacturer's raw data format for spectroscopy data, which contains the relevant information about the volumes of interest. The software tool was created using the Python and JavaScript programming languages and different functions of FSL, a comprehensive library of analysis tools for brain imaging data processing. The steps of the software tool are: determining the coordinates of the volume of interest in FSL voxel coordinates, creating a mask in the correct orientation and location, removing non-brain tissue from the image using FSL's tool tailored for that purpose (BET), segmenting the image using FSL's segmentation tool (FAST), registering the mask on the segmented images, and calculating the volume fractions. The software tool was tested on imaging data obtained at Meilahti Kolmiosairaala for this purpose. The test data set included five different spectroscopy volumes from different parts of the brain and a T1-weighted image. The software tool was given the relevant information about the volumes of interest in the form of a .rda file and the T1-weighted image in the form of a NIfTI file, and it then determined the volume fractions for all five volumes of interest. The volume fractions of different brain areas vary between brains, so an absolute reference value is not available; however, the test results corresponded to the volume fractions that can be expected for the volumes in question.
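    The last step listed above, calculating the volume fractions inside the registered voxel mask, reduces to averaging FSL FAST's partial-volume estimate (PVE) maps within the mask. The Python sketch below is illustrative rather than the thesis code; the file names, and the assumption that pve_0, pve_1 and pve_2 correspond to CSF, gray matter and white matter for a T1-weighted input, are placeholders.

        import nibabel as nib
        import numpy as np

        def voi_tissue_fractions(mask_path, pve_paths):
            """Mean partial volume of each tissue inside a binary VOI mask."""
            mask = nib.load(mask_path).get_fdata() > 0.5          # MRS voxel mask in image space
            n_vox = mask.sum()
            return {tissue: float(nib.load(p).get_fdata()[mask].sum() / n_vox)
                    for tissue, p in pve_paths.items()}

        fractions = voi_tissue_fractions(
            "voi_mask.nii.gz",                                    # mask built from the .rda geometry
            {"csf": "T1_brain_pve_0.nii.gz",                      # FAST partial-volume maps
             "gm":  "T1_brain_pve_1.nii.gz",
             "wm":  "T1_brain_pve_2.nii.gz"})
        print(fractions)                                          # e.g. {'csf': 0.08, 'gm': 0.55, 'wm': 0.37}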
  • Sillanpää, Tom (2019)
    The linear elastic properties of ex vivo porcine lenses were characterized by compression and indentation tests. Compression tests were performed on unglued lenses (N = 76); an average stiffness of 12 ± 3 kPa (± sigma) and a thickness of 7.0 ± 0.3 mm were measured at 24-30 hours post mortem. For glued lenses (N = 70), the average stiffness was 15 ± 4 kPa and the thickness 7.2 ± 0.4 mm. The shear modulus, measured with the indentation test at 12 hours post mortem, was on average 1.5 ± 0.3 kPa (N = 10). Compared to intact lenses, decapsulated lenses were 41% less stiff (N = 5) as measured with the compression test, and their shear modulus was 65% lower (N = 10) as determined by indentation.
  • Rydman, Walter (University of Helsinki, 2001)
  • Torppa, Tuomo (2021)
    User-centered design (UCD) and agile software development (ASDP) each address separate issues that modern software development projects face, but no direct guidelines exist on how to implement both in one project. The relevant literature offers multiple separate detailed techniques, but their applicability depends on several features of the development team, e.g., the personnel and expertise available and the size of the team. In this thesis, we propose a new agile development process model, created by evaluating the existing UCD–ASDP combination methods suggested in the current literature and selecting the application methods most suitable for the case this study is applied to. In this new method, the development team is taken to do their daily work physically near the software's end-users for a short period of time, in order to make the software team as easily accessible as possible. The method was then applied within an ongoing software project for a two-week period, during which the team visited two separate locations where end-users had the possibility to meet the development team. This "touring" method ended up offering the development team a valuable understanding of the skill and involvement level of the end-users they met, without causing significant harm to the developer experience. The end-users were pleased with the visits, and the method gained support and suggestions for future applications.
  • Kuisma, Ilkka (2019)
    Context: The advent of Docker containers in 2013 provided developers with a way of bundling code and its dependencies into containers that run identically on any Docker Engine, effectively mitigating platform and dependency related issues. In recent years an interesting trend has emerged of developers attempting to leverage the benefits provided by the Docker container platform in their development environments. Objective: In this thesis we chart the motivations behind the move towards Containerized Development Environments (CDEs) and seek to categorize claims made about benefits and challenges experienced by developers after their adoption. The goal of this thesis is to establish the current state of the trend and lay the groundwork for future research. Methods: The study is structured into three parts. In the first part we conduct a systematic review of gray literature, using 27 sources acquired from three different websites. The sources were mined for relevant quotes that were used to create a set of higher-level concepts for the expressed motivations, benefits, and challenges. The second part of the study is a qualitative single-case study in which we conduct semi-structured theme interviews with all members of a small software development team that had recently taken a containerized development environment into use. The case team was purposefully selected for its practical relevance as well as convenient access to its members for data collection. In the last part of the study we compare the transcribed interview data against the set of concepts formed in the literature review. Results: Cross-environment consistency and a simplified initial setup, driven by a desire to increase developer happiness and productivity, were commonly expressed motivations that were also experienced in practice. Decreased performance, the required knowledge of Docker, and difficulties in the technical implementation of CDEs were mentioned as the primary challenges. Many developers experienced additional benefits of using the Docker platform for infrastructure provisioning and shared configuration management. The case team additionally used the CDE as a platform for implementing end-to-end testing, and viewed the correct type of team and management as necessary preconditions for its successful adoption. Conclusions: CDEs offer many valuable benefits that come at a cost, and teams have to weigh the trade-off between consistency and performance, and whether the investment of development resources in the implementation is warranted. The use of the Docker container platform as an infrastructure package manager could be considered a game-changer, enabling development teams to provision new services like databases, load balancers and message brokers with just a few lines of code. The case study reports one account of an improved onboarding experience and points towards an area for future research. CDEs would appear to be a good fit for microservice-oriented teams that seek to foster a DevOps culture, as indicated by the experience of the case team. The implementation of CDEs is a non-trivial challenge that requires expertise from the teams and developers using them. Additionally, the case team's novel use of containers for testing appears to be an interesting research topic in its own right. ACM Computing Classification System (CCS): Software and its engineering → Software creation and management → Software development techniques
  • Leppämäki, Tatu (2022)
    Ever more data is available and shared through the internet. These big data masses often have a spatial dimension and can take many forms, one of which is digital text, such as articles or social media posts. The geospatial links in these texts are made through place names, also called toponyms, but traditional GIS methods are unable to deal with such fuzzy linguistic information. This creates the need to transform the linguistic location information into an explicit coordinate form. Several geoparsers have been developed to recognize and locate toponyms in free-form texts: the task of these systems is to be a reliable source of location information. Geoparsers have been applied to topics ranging from disaster management to literary studies. The major language of study in geoparser research has been English, and geoparsers tend to be language-specific, which threatens to leave the experiences provided by studying, and expressed in, smaller languages unexplored. This thesis seeks to answer three research questions related to geoparsing: What are the most advanced geoparsing methods? What linguistic and geographical features complicate this multi-faceted problem? And how should the reliability and usability of geoparsers be evaluated? The major contributions of this work are an open-source geoparser for Finnish texts, Finger, and two test datasets, or corpora, for testing Finnish geoparsers. One of the datasets consists of tweets and the other of news articles. All of these resources, including the relevant code for acquiring the test data and evaluating the geoparser, are shared openly. Geoparsing can be divided into two sub-tasks: recognizing toponyms amid text flows and resolving them to the correct coordinate location. Both tasks have recently turned to deep learning methods and models, in which the input texts are encoded as, for example, word embeddings. Geoparsers are evaluated against gold standard datasets in which toponyms and their coordinates are marked. Performance is measured with equivalence-based metrics for toponym recognition and distance-based metrics for toponym resolution. Finger uses a toponym recognition classifier built on a Finnish BERT model and a simple gazetteer query to resolve the toponyms to coordinate points. The program outputs structured geodata, with the input texts and the recognized toponyms and coordinate locations. While the datasets represent different text types in terms of formality and topics, there is little difference in performance when evaluating Finger against them. The overall performance is comparable to that of geoparsers for English texts. Error analysis reveals multiple error sources, caused either by the inherent ambiguity of the studied language and the geographical world or by the processing itself, for example the lemmatizer. Finger can be improved in multiple ways, such as refining how it analyzes texts and creating more comprehensive evaluation datasets. Similarly, the geoparsing task should move towards more complex linguistic and geographical descriptions than just toponyms and coordinate points. Finger is not, in its current state, a ready source of geodata. However, the system has the potential to be the first step towards geoparsers for Finnish, and it can be a stepping stone for future applied research.
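    The two sub-tasks described above, toponym recognition followed by gazetteer-based resolution, can be illustrated with a short Python sketch. This is not the Finger code: the NER model name, the toy gazetteer and the LOC label are placeholders, and the lemmatization that Finger relies on is only noted in a comment.

        from transformers import pipeline

        # Placeholder model name; Finger builds its recognizer on a Finnish BERT model.
        ner = pipeline("token-classification",
                       model="a-finnish-ner-model",
                       aggregation_strategy="simple")

        # Toy gazetteer mapping lemmatized place names to (lat, lon) coordinates.
        gazetteer = {"Helsinki": (60.17, 24.94), "Tampere": (61.50, 23.76)}

        def geoparse(text):
            # In practice Finnish toponyms must be lemmatized before the gazetteer lookup;
            # that step is omitted here for brevity.
            toponyms = [e["word"] for e in ner(text) if e["entity_group"] == "LOC"]
            return [(name, gazetteer.get(name)) for name in toponyms]

        print(geoparse("Helsinki ja Tampere ovat Suomen suurimpia kaupunkeja."))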
  • Kauria, Laura (2016)
    The purpose of this Master's thesis was to create a new model for screening possible optimal locations for utility-scale solar power plants (i.e. solar parks, solar power stations and solar farms) in larger city areas. The model can be used as part of decision making when examining site potential in a particular city of interest, and it includes forecasts for the year 2040. The main questions of the thesis are as follows: 1) What are the main criteria for a good location for a utility-scale solar power plant, and 2) how can a geographic information system (GIS) model be built for solar power plant location optimization? Solar power plants provide an alternative way of producing renewable energy due to the enormous distribution potential of solar energy. A disadvantage of utility-scale solar energy production is that it requires larger areas of land than more traditional power plants. Converting land to solar farms might threaten both rich biodiversity and food production, which is why these factors are included in the model. In this study, methods from the field of geographic information science were applied to quantitative location optimization. Spatial analytics and geostatistics, which are effective tools for narrowing down optimal geographical areas, were applied to finding optimal locations for solar power plants, especially in larger city regions. The model was developed with an iterative approach and tested in Harare (Zimbabwe), Denver (United States) and Helsinki (Finland). The optimization model is based on three raster datasets that are integrated through overlay analysis. The first contains spatial solar radiation estimates for each month separately and is derived from a digital elevation model and monthly cloud cover estimates; the resulting radiation estimates are the core factor in estimating energy production. The second and third are two separate global datasets used to deal with land use pressure issues. The first of these is a hierarchically classified land systems model based on land cover and the intensiveness of agriculture and livestock, while the second is a nature conservation prioritization dataset showing the most important areas for conserving threatened vertebrate species. The integration of these datasets aims to facilitate smart and responsible land use planning and sustainability while providing information to support profitable investments. The model is based on tools implemented in the ArcGIS 10 software. The Area Solar Radiation tool was used for calculating the global and direct radiation for each month separately under clear-sky conditions. An estimate of the monthly cloud coverage was calculated from 30 years of empirical cloud data using a probability mapping technique, and the clear-sky radiation estimates were adjusted with these cloud coverage estimates to produce the actual radiation estimates. Reclassifying the values from the land use datasets enabled the exclusion of unsuitable areas from the output maps. Finally, the integration and visualization of the datasets result in output maps for each month of the year. The maps are the end product of the model and can be used to focus decision making on the most suitable areas for utility-scale solar power plants. The model showed that the proportion of possibly suitable areas was 40 % in Harare (original study area 40 000 km2), 55 % in Denver (90 000 km2) and 30 % in Helsinki (10 000 km2).
This model did not exclude areas with low solar radiation potential. In Harare, the yearly variation in maximum radiation was low (100 kWh/m2/month), whereas in Denver it was 2.5-fold and in Helsinki 1.5-fold. The solar radiation variations within a single city were notable in Denver and Harare, but not in Helsinki. It is important to calculate radiation estimates using a digital elevation model and cloud coverage estimates rather than estimating the level of radiation in the atmosphere. This spatial information can be used for directing further investigations on potential sites for solar power plants. These further investigations could include land ownership, public policies and investment attractiveness.
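    The core of the model described above is a raster overlay: monthly radiation estimates are kept only where the reclassified land systems and conservation prioritization rasters permit development. A toy numpy illustration of that overlay step is given below; the thesis itself uses ArcGIS 10 tools, and the values here are made up.

        import numpy as np

        # Hypothetical 2x2 rasters covering the same grid.
        radiation = np.array([[140.0, 150.0],
                              [120.0, 160.0]])              # kWh/m2/month after cloud correction
        land_ok   = np.array([[True, False],
                              [True, True]])                # reclassified land systems raster
        cons_ok   = np.array([[True, True],
                              [False, True]])               # conservation prioritization mask

        # Keep radiation values only in cells that both masks allow; others become NaN.
        suitability = np.where(land_ok & cons_ok, radiation, np.nan)
        print(suitability)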
  • Snellman, Mikael (2018)
    Today many of the most popular service providers, such as Netflix, LinkedIn and Amazon, compose their applications from a group of individual services. These providers need to deploy new changes and features continuously, without any downtime in the application, and to scale individual parts of the system on demand. To address these needs, the use of microservice architecture has grown in popularity in recent years. In a microservice architecture, the application is a collection of services which are managed, developed and deployed independently. This independence enables the microservices to be polyglot when needed, meaning that the developers can choose the technology stack for each microservice individually depending on its nature. The independent and polyglot nature of microservices can make developing a single service easier, but it also introduces significant operational overhead if not taken into account when adopting the architecture. These overheads include the need for extensive DevOps, monitoring, infrastructure and preparation for the fallacies of distributed systems. Many cloud-native and microservice-based applications suffer from outages even with thorough unit and integration tests applied. This can be because distributed cloud environments are prone to failures at the node or even regional level, which cause unexpected behavior in the system when not prepared for. The application's ability to recover and maintain functionality at an acceptable level under these unexpected faults, also known as resilience, should therefore also be tested systematically. In this thesis we give an introduction to the microservice architecture. We inspect an industry case in which a leading banking company suffered from issues regarding resiliency. We examine the challenges of resilience testing microservice-architecture-based applications, and we compose a small microservice application which we use to study the defensive design patterns, tools and methods available for testing the resilience of microservice architectures.
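    One commonly cited defensive design pattern of the kind mentioned above is the circuit breaker, which makes a caller fail fast instead of repeatedly hammering an unhealthy downstream service. The minimal Python sketch below is purely illustrative and is not taken from the thesis; the thresholds and naming are arbitrary.

        import time

        class CircuitBreaker:
            """Fail fast after repeated downstream failures, then retry after a cool-down."""

            def __init__(self, max_failures=3, reset_after=30.0):
                self.max_failures = max_failures
                self.reset_after = reset_after
                self.failures = 0
                self.opened_at = None

            def call(self, func, *args, **kwargs):
                if self.opened_at is not None:
                    if time.time() - self.opened_at < self.reset_after:
                        raise RuntimeError("circuit open: failing fast")   # protect the sick service
                    self.failures, self.opened_at = 0, None                # half-open: allow one retry
                try:
                    result = func(*args, **kwargs)
                except Exception:
                    self.failures += 1
                    if self.failures >= self.max_failures:
                        self.opened_at = time.time()                       # trip the breaker
                    raise
                self.failures = 0                                          # success resets the counter
                return result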
  • Seppälä, Eemi (2019)
    Methane (CH4) is a powerful greenhouse gas, and even though the CH4 concentrations in the atmosphere have been increasing rapidly since the year 1750, large uncertainties still remain in the individual source terms of the global CH4 budget. Measuring the isotopic fractions of various CH4 sources should lead to new knowledge on the processes involved in CH4 formation and emission pathways. Nowadays stable isotope measurements for various CH4 sources are made quite routinely, but radiocarbon measurements have long been too expensive and time-consuming. For this reason a new CH4 sampling system for radiocarbon measurements was developed at the Laboratory of Chronology of the University of Helsinki. The system allows sampling directly from the atmosphere or from different environmental sources using chambers. To demonstrate the functionality of the system, it was tested and optimized in various laboratory experiments and in the field. The laboratory measurements showed that, before combustion of CH4 to carbon dioxide (CO2), ambient carbon monoxide (CO) and CO2 can be removed from the sample gas flow for more than 10 hours at a flow rate of 1 l/min. After the CO and CO2 removal, the CH4 in the sample gas is combusted to CO2. The combustion efficiency for CH4 was 100% at a flow rate of 0.5 l/min. After CH4 is combusted to CO2, it is collected on molecular sieves and can later be analyzed using an accelerator mass spectrometer. The laboratory measurements, however, showed that, due to adsorption of nitrogen (N2) on the molecular sieves, the 1 g of molecular sieve material used in the sample sieve tubes was not sufficient for low-concentration samples, for which the sampling times are very long. In the field, CH4 was collected from ambient atmospheric air at the Hyytiälä SMEAR II station, Juupajoki, Finland, and from tree and soil chambers. The radiocarbon content of the atmospheric CH4 was 102.27 ± 0.02 percent Modern Carbon (pMC) and 101.40 ± 0.02 pMC. These values were much lower than the expected values, indicating a large spatial and temporal variability. The CH4 collected from chambers closed around tree stems had a radiocarbon content of 113.60 ± 0.37 pMC, which was slightly higher than the 108.71 ± 0.37 pMC measured from soil chambers located in the nearby Siikaneva peatland. This indicated that a larger fraction of the CH4 emitted from the peatland surface was recently fixed near the soil surface, whereas a larger fraction of the CH4 emitted from the tree-stem surfaces was of older origin, transported via the roots from deeper in the soil. There is, however, a possibility that the lower radiocarbon content of the CH4 emitted from the peatland surface was due to a significant contribution of old CH4 fixed before the bomb effect and diffusing from deeper in the soil. This would explain the results from the autumn campaign, where the radiocarbon contents were 91.84 ± 0.03 pMC during nighttime and 104.26 ± 0.03 pMC during daytime. These results also indicated that during the daytime more of the emitted CH4 is fixed near the surface of the peatland soil. One additional CH4 sample was collected in January 2019 from ambient atmospheric air at Kumpula, Helsinki, Finland, using a significantly larger molecular sample sieve. This sample had a radiocarbon content of 52.40 ± 0.21 pMC. The old carbon in the sample originated from fossil methane used in earlier laboratory experiments and indicated that the regeneration process for the larger sample sieve was incomplete.
Overall the system functions very well when collecting samples from environmental chambers, as the CH4 concentrations are allowed to build up before the sample is collected. For atmospheric samples, for which the sampling times are longer, the sample sieve size and the regeneration time and temperature will have to be investigated further. In the future, more measurements of the radiocarbon content of individual CH4 sources are needed to provide better knowledge of the CH4 pathways. This portable system offers an efficient way to collect CH4 samples for radiocarbon analyses from various locations.
  • Eklund, Tommy (2013)
    Large screens, interactive or not, are becoming a common sight in shopping centers and other public places. These screens are used to advertise or to share information interactively. Combined with the omnipresence of smartphones, this gives rise to a unique opportunity to join these two interfaces, combining their strengths and complementing their weaknesses. Smartphones are very mobile thanks to their small size and can access information from virtually anywhere, but they suffer from information overflow: users have too many applications and websites to search through to find what they want or need in a timely fashion. Public screens, on the other hand, are too large to provide information everywhere or in a personalized way, but they often have the information you need, when and where you need it. Thus large screens provide an ideal place for users to select content onto their smartphones. Large screens also have the advantage of screen size, and research has indicated that using a second screen with small handheld devices can improve the user experience. This thesis undertook the design and development of a prototype Android application for an existing large interactive public screen. The initial goal was to study different aspects of personal mobile devices coupled with large public screens. The large screen interface is also under development as a ubiquitous system, and the mobile application was designed to be part of this system, so the design of the mobile application needed to be consistent with the public screen. During the development of this application it was observed that the small mobile screen could not support the content or interactions designed for a much larger screen. As a result, this thesis focuses on developing a prototype that further research could draw upon. This led to a study of small-screen graph data visualization and of previous research on mobile applications working together with large public screens. This thesis presents a novel approach for displaying graph data designed for large screens on a small mobile screen. The work also discusses many challenges and questions related to large screen interaction with mobile devices that arose during the development of the prototype. An evaluation was conducted to gather both quantitative and qualitative data on the interface design and its consistency with the large screen interface, in order to further analyze the resulting prototype. The most important findings of this work are the problems encountered and the questions raised during the development of the mobile application prototype. The thesis provides several suggestions for future research using the application, the ubiquitous system and the large screen interface. The study of related work and the prototype development also led to suggested design guidelines for this type of application. The evaluation data also suggests that the final mobile application design is both consistent with and performs better than a faithful implementation of the visuals and interaction model of the original large screen interface.