
Browsing by Title


  • Saarela, Lasse (2020)
    The aim of this study is to analyse the financial commitments of developed country parties under the Global Climate Change Regime (GCCR) and how these commitments promote the objectives of the regime. In the first part of this research I examine the GCCR and the regulation process that takes place under it. The GCCR is a specialized regime that operates semi-autonomously and as such contributes to the fragmentation of international law. This part is followed by a brief overview of the evolution of the financial commitments within the GCCR. I then illustrate the significant change in the level of operationalisation brought on by the Copenhagen Accord and subsequent COP decisions. The analysis itself is done in two parts. The first half utilizes the legal dogmatic research method to examine the individual provisions on providing, mobilizing and reporting climate finance. The second part also utilizes the legal dogmatic method, but the analysis concentrates on the objectives of the GCCR and the role climate finance (CF) has in promoting these objectives. In the final part of this research I explore possible options to enhance effectiveness with principles of international law and customary law. The result of the analysis is that there has been much development in the operationalization of the financial commitments after the Copenhagen Accord. The operationalization is, however, incomplete. This is due to several factors. Firstly, the term climate finance lacks an agreed definition, which makes measuring the financial flows accurately difficult. Secondly, the obligation to provide climate finance is a collective procedural obligation which cannot be transformed into obligations for individual states before a burden-sharing agreement is reached. The principles of international law can be used to guide the parties in deciding what is a fair and equitable burden sharing, but they alone cannot remedy the absence of an agreed definition and a burden-sharing agreement. The commitments on climate finance are ambiguous and the sums remain arbitrary. Assessing an individual state's compliance with its commitments is nearly impossible. The current climate finance framework of the GCCR is disconnected from the objectives of the regime and does little to promote them.
  • Torppa, Tuomo (2021)
    User-centered design (UCD) and agile software development (ASDP) each address separate issues that modern software development projects face, but no direct guidelines exist on how to implement both in one project. Relevant literature offers multiple separate detailed techniques, but the applicability of the techniques is dependent on several features of the development team, e.g., the personnel and expertise available and the size of the team. In this thesis, we propose a new agile development process model, created by evaluating the existing UCD–ASDP combination methods suggested in the current literature to find the application methods most suitable for the case this study is applied to. In this new method, the development team is taken to do their daily work physically near the software's end-users for a short period of time to make the software team as easily accessible as possible. This method is then applied within an ongoing software project for a two-week period in which the team visits two separate locations where end-users have the possibility to meet the development team. This introduced "touring" method ended up offering the development team a valuable understanding of the skill and involvement level of the end-users they met without causing significant harm to the developer experience. The end-users were pleased with the visits and the method gained support and suggestions for future applications.
  • Kuisma, Ilkka (2019)
    Context: The advent of Docker containers in 2013 provided developers with a way of bundling code and its dependencies into containers that run identically on any Docker Engine, effectively mitigating platform and dependency related issues. In recent years an interesting trend has emerged of developers attempting to leverage the benefits provided by the Docker container platform in their development environments. Objective: In this thesis we chart the motivations behind the move towards Containerized Development Environments (CDEs) and seek to categorize claims made about the benefits and challenges experienced by developers after their adoption. The goal of this thesis is to establish the current state of the trend and lay the groundwork for future research. Methods: The study is structured into three parts. In the first part we conduct a systematic review of gray literature, using 27 sources acquired from three different websites. Relevant quotes were extracted from the sources and used to create a set of higher-level concepts for the expressed motivations, benefits, and challenges. The second part of the study is a qualitative single-case study where we conduct semi-structured theme interviews with all members of a small software development team that had recently taken a containerized development environment into use. The case team was purposefully selected for its practical relevance as well as convenient access to its members for data collection. In the last part of the study we compare the transcribed interview data against the set of concepts formed in the literature review. Results: Cross-environment consistency and a simplified initial setup, driven by a desire to increase developer happiness and productivity, were commonly expressed motivations that were also experienced in practice. Decreased performance, required knowledge of Docker, and difficulties in the technical implementation of CDEs were mentioned as primary challenges. Many developers experienced additional benefits of using the Docker platform for infrastructure provisioning and shared configuration management. The case team additionally used the CDE as a platform for implementing end-to-end testing, and viewed the correct type of team and management as necessary preconditions for its successful adoption. Conclusions: CDEs offer many valuable benefits that come at a cost, and teams have to weigh the trade-off between consistency and performance, and whether the investment of development resources in their implementation is warranted. The use of the Docker container platform as an infrastructure package manager could be considered a game-changer, enabling development teams to provision new services like databases, load balancers and message brokers with just a few lines of code. The case study reports one account of an improved onboarding experience and points towards an area for future research. CDEs would appear to be a good fit for microservice-oriented teams that seek to foster a DevOps culture, as indicated by the experience of the case team. The implementation of CDEs is a non-trivial challenge that requires expertise from the teams and developers using them. Additionally, the case team's novel use of containers for testing appears to be an interesting research topic in its own right. ACM Computing Classification System (CCS): Software and its engineering → Software creation and management → Software development techniques
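    A minimal sketch of the basic idea behind a containerized development environment, assuming the team's toolchain lives in a Docker image: the project directory is mounted into a throwaway container and a development task (here a test run) executes inside it, so every developer gets an identical environment. The image name and paths below are placeholders, not the case team's setup.

        # Run a development task inside a disposable container using the Docker SDK for Python.
        import docker  # pip install docker; requires a running Docker Engine

        client = docker.from_env()
        logs = client.containers.run(
            image="python:3.11-slim",                       # placeholder for a team-built dev image
            command=["python", "-m", "pytest", "-q"],       # the task to run inside the CDE
            volumes={"/home/dev/project": {"bind": "/workspace", "mode": "rw"}},  # mount the project
            working_dir="/workspace",
            remove=True,                                    # throwaway: nothing persists on the host
        )
        print(logs.decode())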
  • Paananen, Saana (2022)
    This master's thesis aimed to develop a calibration model for whipping cream for the FTIR spectrophotometer MilkoScan™ FT3 and to identify the effects of disruptive factors on whipping cream measurement. The previous calibration model for whipping cream left room for improvement in the level of the results, especially for lactose. The experimental part of this research included the development of a calibration model, calibration of lactose, a total solids assay, and calculation of measurement uncertainty. The calibration model was developed utilizing the statistical methods PCA and PLS in the FossCalibrator program. Lactose calibration was carried out with the LactoSens method, utilizing whipping cream and α-D-lactose, and results from the reference laboratory were used as the reference. The accuracy and functionality of the total solids calibration needed to be verified, which is why the experimental part included the total solids assay. Total solids were determined by a gravimetric method, and the results were compared with those of the reference laboratory. The measurement uncertainty of the calibration model was calculated from the reproducibility, repeatability, and accuracy results of the whipping cream samples. The new calibration model for whipping cream was verified with validation samples, and the check results were at the expected level. The result of the lactose calibration was considerably more accurate than with the previous model. The total solids results from the gravimetric method differed slightly from the FT3 results, which, however, had no practical effect. The measurement uncertainty was relatively good, but the calculation will be improved as the number of reference results increases. The main objective of this research was achieved, i.e., the development of the calibration model was successful. The research on the disruptive factors remained more limited than initially intended.
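    A minimal sketch of the kind of multivariate calibration described above, making no assumptions about FossCalibrator itself: a partial least squares (PLS) regression mapping FTIR spectra to a lactose reference value. The spectra, reference values and component count are placeholders.

        # PLS calibration sketch with placeholder data; the actual preprocessing and
        # model settings of the thesis workflow are not reproduced here.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 1060))          # 60 cream samples x 1060 FTIR data points (hypothetical)
        y = rng.uniform(2.5, 4.5, size=60)       # reference lactose content, g/100 g (hypothetical)

        pls = PLSRegression(n_components=8)      # the number of latent variables is tuned in practice
        rmse = -cross_val_score(pls, X, y, cv=5, scoring="neg_root_mean_squared_error")
        print("cross-validated RMSE per fold:", np.round(rmse, 3))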
  • Parkkinen, Ilmari (2018)
    MicroRNAs are ~22 nucleotide long RNA strands which regulate gene expression by binding to the 3’UTRs of messenger RNAs. MicroRNAs are predicted to regulate about half of all protein-coding genes in the human genome, thus affecting many cellular processes. One crucial part of microRNA biogenesis is the cleaving of pre-miRNA strands into mature microRNAs by the type III RNase enzyme, Dicer. Dicer has been shown to be downregulated due to aging and in many disease states. Particularly central nervous system disorders are linked to dysregulated microRNA processing. According to the latest studies, Dicer is crucial to the survival of dopaminergic neurons, and conditional Dicer knockout mice show severe nigrostriatal dopaminergic cell loss, which is a hallmark of Parkinson’s disease. By activating Dicer with a small-molecule drug, enoxacin, the survival of dopaminergic cells exposed to stress is significantly improved. However, enoxacin, which is a fluoroquinolone antibiotic, activates Dicer only at high concentrations (10-100 μM) and is polypharmacological, which may cause detrimental side effects. Therefore, enoxacin is not a suitable drug candidate for Dicer deficiencies and better Dicer-activating drug candidates are needed. The aim of this work was to develop a cell-based fluorescent assay to screen for Dicer-activating compounds. Assays which measure Dicer activity have already been developed, but they have pitfalls that make them suboptimal for high-throughput screening of Dicer-activating compounds. Some are cell-free enzyme-based assays and thus neglect Dicer in its native context. The RNA to be processed by Dicer does not represent a common mammalian RNA type. Most assays do not have internal normalizing factors, such as a second reporter protein to account for e.g. cell death, or the analysis method is not feasible for high-throughput screening data. Considering these disadvantages, the study started by designing a reporter plasmid in silico. The plasmid expresses two fluorescent proteins, mCherry (red) and EGFP (green), and an mCherry transcript-targeting siRNA implemented into a pre-miR155 backbone which is processed by Dicer. Thus, measuring the ratio of red and green fluorescence intensities gives an indication of Dicer activity. The plasmid also has additional regulatory elements for stabilizing expression levels. The plasmid was then produced by molecular cloning methods and its functionality was tested with Dicer-modulating compounds. The assay was optimised by testing it in different cell lines and varying assay parameters, and stable cell lines were created to make large-scale screening more convenient. Finally, a small-scale screen was done with ten pharmacologically active compounds. In transiently transfected Chinese hamster ovary cells, mCherry silencing was too efficient for reliable detection of improvements in silencing efficiency due to a floor effect. With an inducible Tet-On system in FLP-IN 293 T-Rex cells, the expression could be controlled by administering doxycycline and the improvement in silencing was quantifiable. The assay seemed to be functional after 72 hours and 120 hours of incubation using enoxacin (100 μM) as a positive control. However, the screening found no compounds that significantly reduced the mCherry/EGFP fluorescence ratio and, additionally, the effect of enoxacin was abolished.
    Therefore, a more thorough analysis of the effects of enoxacin was done and, although statistically significant, enoxacin was only marginally effective in reducing the mCherry/EGFP fluorescence ratio after 72 hours of treatment. It should be noted from the small-scale screening that metformin and BDNF, compounds previously shown to elevate Dicer levels, showed similar effects to enoxacin. The quality of the assay in terms of high-throughput screening was determined by calculating Z-factors and coefficients of variation for the experiments, which showed that the variability of the assay was acceptable, but the differences between controls were not large enough for reliable screening. In conclusion, the effects of metformin and BDNF should be further studied and, regarding the assay, more optimisation is needed before large-scale, high-throughput screening can be done with minimal resources.
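    A minimal sketch of the screening-quality arithmetic referred to above: the per-well mCherry/EGFP ratio as the readout and the Z-factor computed from positive and negative control wells. All numbers are illustrative placeholders, not data from the thesis.

        # Z-factor for assay quality: Z = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|.
        # Values above ~0.5 are conventionally taken as good separation for high-throughput screening.
        import numpy as np

        def fluorescence_ratio(mcherry, egfp):
            """Per-well readout: red/green fluorescence intensity ratio."""
            return np.asarray(mcherry, float) / np.asarray(egfp, float)

        def z_factor(positive, negative):
            positive, negative = np.asarray(positive), np.asarray(negative)
            return 1.0 - 3.0 * (positive.std(ddof=1) + negative.std(ddof=1)) / abs(positive.mean() - negative.mean())

        rng = np.random.default_rng(1)
        neg = fluorescence_ratio(rng.normal(1000, 80, 48), rng.normal(900, 60, 48))  # vehicle control wells
        pos = neg * rng.normal(0.85, 0.05, 48)                                       # modest extra silencing
        print("Z-factor:", round(z_factor(pos, neg), 2))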
  • Äärilä, Johannes (2013)
    This thesis investigates the methods and principles used to calculate timberland return indices. By studying these existing indices and possible new methods, the study contributes to the accuracy and methodology of timberland return measurement. Attention towards timberland investing has been increasing among institutional investors, while at the same time timberland return indices are also being utilized by policy makers as supporting indicators for policy decisions. The possibility to measure timberland returns accurately is understandably of great interest and a desirable goal. Previous literature does discuss the general aspects of timberland return measurement and index calculation, but says very little about the actual index number theory and its implications for timberland return measurement. For this reason, there are some issues in the currently available indices that make them prone to bias and otherwise unfavorable and inappropriate in the context of index number theory. The four existing indices considered in this thesis are the NCREIF Timberland Index, the John Hancock Timber Index, the Timberland Performance Index and the index formula utilized by the Finnish Forest Research Institute. The results of the examination confirm the benefits of a fully regulated forest in index construction, as it offers a stable and comparable base for an index. The effects and trade-offs of price selection, interest rate and index frequency are also presented and discussed in detail. The utilization of net present value in index construction, instead of the liquidation value, is a new approach taken in this thesis, and the issues regarding its use in index calculation are considered and assessed. The key finding of this thesis is that the index formula used by the Finnish Forest Research Institute suffers from a weighting problem and is not consistent in aggregation. To overcome the index number problems present in the existing indices, the pseudo-superlative Montgomery-Vartia index formula is applied to timberland returns. It is shown that the index is consistent in aggregation and that it closely approximates the desirable superlative indices. As a result, this thesis advocates the use of the Montgomery-Vartia index. It is a more appropriate formula for timberland return measurement than the currently used or other available index formulas, and its implementation should therefore be considered.
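    A minimal sketch of the Montgomery-Vartia (Vartia I) formula advocated above, here in its standard price-index form with hypothetical stumpage prices and volumes: each component's log price change is weighted by the logarithmic mean of its value share, which is what gives the index its consistency in aggregation.

        # Montgomery-Vartia index: exp( sum_i [ L(v_i1, v_i0) / L(V1, V0) ] * ln(p_i1 / p_i0) ),
        # where v_i = p_i * q_i is a component's value, V the aggregate value, and L the logarithmic mean.
        import math

        def logmean(a, b):
            """Logarithmic mean: L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a."""
            return a if a == b else (a - b) / (math.log(a) - math.log(b))

        def montgomery_vartia(prices0, prices1, quantities0, quantities1):
            v0 = [p * q for p, q in zip(prices0, quantities0)]
            v1 = [p * q for p, q in zip(prices1, quantities1)]
            V0, V1 = sum(v0), sum(v1)
            log_index = sum(
                logmean(a1, a0) / logmean(V1, V0) * math.log(p1 / p0)
                for a0, a1, p0, p1 in zip(v0, v1, prices0, prices1)
            )
            return math.exp(log_index)

        # Two hypothetical timber assortments: stumpage price (EUR/m3) and volume (m3) in two periods.
        print(montgomery_vartia([55.0, 30.0], [58.0, 29.0], [400.0, 900.0], [420.0, 880.0]))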
  • Dirks, Anna (2021)
    Antibiotic resistance is an increasing and serious threat to human health, leading to a growing need for alternative therapies. Phage therapy, using bacterial viruses to fight infections, is a promising alternative to antibiotic therapy. However, several obstacles need to be overcome. Regrettably, phage therapy remains inaccessible to many laboratories worldwide due to the need for expensive machinery to establish the sensitivity of bacteria to phage. Moreover, shipping phages between laboratories remains challenging. In the current study a device-free bacteriophage typing assay, the PhagoGramAssay, was developed. In the assay, bacteria suspended in soft agar were poured onto a 60-well Terasaki plate containing phages suspended in fibrillated nanocellulose, separated from the bacteria by a seal. Phages were released into the bacterial agar layer by puncturing the seal to test for sensitivity observable with the naked eye. The contrast between the lysis zone and the bacterial lawn was enhanced using 2,3,5-triphenyltetrazolium chloride. Optimized parameters included the amount of bacteria and phage added, the volume of phage suspension, agar percentage and thickness, and puncturing tool size. In addition, a prototype of such a puncturing tool was developed. The optimized PhagoGramAssay was tested using several bacteria-phage combinations. Moreover, the infectivity and stability of phages stored on Terasaki plates was followed over the course of 4 weeks. The optimal bacterial amount added was found to be a 1:300 dilution in soft agar taken from an OD600 = 1 culture. Phage suspensions used in the assay were found to need a titer of at least 10⁸ PFU/ml in the original lysate, with 8 µl of a 1:10 dilution in fibrillated nanocellulose present in the wells. Optimal agar conditions were found to be 0.4% – 0.5% (w/v) with a thickness of 2 mm – 3 mm. The optimal puncturing tool shape was found to be a slit with a thickness of 0.5 mm. Using these conditions, sensitivity could be established for a vast number of bacteria-phage combinations. All phages remained stable and infective over the course of 4 weeks. The newly developed PhagoGramAssay can be further developed into a kit-like phage typing assay that would enable laboratories to test for sensitivity on site whenever a multi-drug resistant bacterial strain is isolated from a patient sample, effectively making phage therapy accessible to laboratories that cannot afford expensive machinery. Additionally, the use of fibrillated nanocellulose should enable laboratories to exchange phages. The final form of such a kit, however, is dependent on manufacturers and investors and may need to be adjusted accordingly.
  • Leppämäki, Tatu (2022)
    Ever more data is available and shared through the internet. The big data masses often have a spatial dimension and can take many forms, one of which is digital text, such as articles or social media posts. The geospatial links in these texts are made through place names, also called toponyms, but traditional GIS methods are unable to deal with such fuzzy linguistic information. This creates the need to transform linguistic location information into an explicit coordinate form. Several geoparsers have been developed to recognize and locate toponyms in free-form texts: the task of these systems is to be a reliable source of location information. Geoparsers have been applied to topics ranging from disaster management to literary studies. The major language of study in geoparser research has been English, and geoparsers tend to be language-specific, which threatens to leave the experiences studied and expressed in smaller languages unexplored. This thesis seeks to answer three research questions related to geoparsing: What are the most advanced geoparsing methods? What linguistic and geographical features complicate this multi-faceted problem? And how should the reliability and usability of geoparsers be evaluated? The major contributions of this work are an open-source geoparser for Finnish texts, Finger, and two test datasets, or corpora, for testing Finnish geoparsers. One of the datasets consists of tweets and the other of news articles. All of these resources, including the relevant code for acquiring the test data and evaluating the geoparser, are shared openly. Geoparsing can be divided into two sub-tasks: recognizing toponyms amid text flows and resolving them to the correct coordinate location. Both tasks have recently turned to deep learning methods and models, where the input texts are encoded as, for example, word embeddings. Geoparsers are evaluated against gold standard datasets in which toponyms and their coordinates are marked. Performance is measured with equivalence-based metrics for toponym recognition and distance-based metrics for toponym resolution. Finger uses a toponym recognition classifier built on a Finnish BERT model and a simple gazetteer query to resolve the toponyms to coordinate points. The program outputs structured geodata, with the input texts and the recognized toponyms and coordinate locations. While the datasets represent different text types in terms of formality and topics, there is little difference in performance when evaluating Finger against them. The overall performance is comparable to that of geoparsers for English texts. Error analysis reveals multiple error sources, caused either by the inherent ambiguity of the studied language and the geographical world or by the processing itself, for example by the lemmatizer. Finger can be improved in multiple ways, such as refining how it analyzes texts and creating more comprehensive evaluation datasets. Similarly, the geoparsing task should move towards more complex linguistic and geographical descriptions than just toponyms and coordinate points. Finger is not, in its current state, a ready source of geodata. However, the system has the potential to be a first step for Finnish geoparsers and a stepping stone for future applied research.
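    A minimal sketch of the two-stage pipeline described above: toponym recognition followed by gazetteer resolution. The recognizer below is a stub standing in for Finger's BERT-based classifier, and the tiny dictionary stands in for a real gazetteer; real Finnish input would also need lemmatization of the surface forms before the lookup.

        # Stage 1: recognize toponyms; stage 2: resolve them to coordinate points.
        from typing import List, Tuple

        GAZETTEER = {"Helsinki": (60.17, 24.94), "Juupajoki": (61.79, 24.37)}  # toy gazetteer

        def recognize_toponyms(text: str) -> List[str]:
            """Stub recognizer: Finger uses a fine-tuned Finnish BERT token classifier;
            here we simply pick tokens that the toy gazetteer knows about."""
            return [tok.strip(".,") for tok in text.split() if tok.strip(".,") in GAZETTEER]

        def resolve(toponyms: List[str]) -> List[Tuple[str, Tuple[float, float]]]:
            """Gazetteer lookup: map each recognized toponym to a coordinate point."""
            return [(name, GAZETTEER[name]) for name in toponyms]

        text = "The SMEAR II station is in Juupajoki, roughly 200 km north of Helsinki."
        print(resolve(recognize_toponyms(text)))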
  • Kauria, Laura (2016)
    The purpose of this Master's thesis was to create a new model for screening possible optimal locations for utility-scale solar power plants (i.e. solar parks, solar power stations and solar farms) in larger city areas. The model can be used as a part of decision making when examining site potential in a particular city of interest. The model includes forecasts for the year 2040. The main questions of the thesis are as follows: 1) What are the main criteria for a good location for a utility-scale solar power plant, and 2) how can a geographic information system (GIS) model be built for solar power plant location optimization? Solar power plants provide an alternative for producing renewable energy due to the enormous distribution potential of solar energy. A disadvantage of utility-scale solar energy production is the fact that it requires larger areas of land than more traditional power plants. Converting land to solar farms might threaten both rich biodiversity and food production, which is why these factors are included in the model. In this study, methods from the field of geographic information science were applied to quantitative location optimization. Spatial analytics and geostatistics, which are effective tools for narrowing down optimal geographical areas, were applied to finding optimal locations for solar power plants, especially in larger city regions. The model was developed with an iterative approach. The resulting model was tested in Harare (Zimbabwe), Denver (United States) and Helsinki (Finland). The optimization model is based on three raster datasets that are integrated through overlay analysis. The first one contains spatial solar radiation estimates for each month separately and is derived from a digital elevation model and monthly cloud cover estimates. The resulting radiation estimates are the core factor in estimating energy production. The second and third datasets are two separate global datasets, which were used to deal with land use pressure issues. The first of these is a hierarchically classified land systems model based on land cover and the intensiveness of agriculture and livestock, while the second is a nature conservation prioritization dataset, which shows the most important areas for conserving threatened vertebrate species. The integration of these datasets aims to facilitate smart and responsible land use planning and sustainability while providing information to support profitable investments. The model is based on tools implemented in the ArcGIS 10 software. The Area Solar Radiation tool was used for calculating the global and direct radiation for each month separately under clear-sky conditions. An estimate of the monthly cloud coverage was calculated from 30 years of empirical cloud data using a probability mapping technique. To produce the actual radiation estimates, the clear-sky radiation estimates were improved using the cloud coverage estimates. Reclassifying the values from the land use datasets enabled the exclusion of unsuitable areas from the output maps. Eventually, the integration and visualization of the datasets result in output maps for each month of the year. The maps are the end product of the model and they can be used to focus decision making on the most suitable areas for utility-scale solar power plants. The model showed that the proportion of possibly suitable areas was 40 % in Harare (original study area 40 000 km²), 55 % in Denver (90 000 km²) and 30 % in Helsinki (10 000 km²).
    This model did not exclude areas with low solar radiation potential. In Harare, the yearly variation in maximum radiation was low (100 kWh/m²/month), whereas in Denver it was 2.5-fold and in Helsinki 1.5-fold. The solar radiation variations within a single city were notable in Denver and Harare, but not in Helsinki. It is important to calculate radiation estimates using a digital elevation model and cloud coverage estimates rather than estimating the level of radiation in the atmosphere. This spatial information can be used for directing further investigations of potential sites for solar power plants. These further investigations could include land ownership, public policies and investment attractiveness.
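    A minimal sketch of the overlay logic described above, not the ArcGIS 10 workflow itself: monthly clear-sky radiation is scaled by a cloud-cover estimate, cells in excluded land-use classes are masked out, and the result is a per-cell suitability surface. The tiny random rasters and class codes are placeholders.

        # Toy raster overlay: radiation estimate corrected for cloudiness, masked by land use.
        import numpy as np

        rng = np.random.default_rng(42)
        clear_sky = rng.uniform(120, 200, size=(4, 4))        # clear-sky radiation, kWh/m2/month
        cloud_fraction = rng.uniform(0.1, 0.6, size=(4, 4))   # monthly cloud-cover estimate (0-1)
        land_class = rng.integers(1, 5, size=(4, 4))          # hypothetical codes: 2 = intensive cropland, 3 = conservation priority

        # Scale clear-sky values down by cloudiness (a simplification of the probability mapping used in the thesis).
        actual_radiation = clear_sky * (1.0 - cloud_fraction)

        # Reclassify land use: exclude cropland and conservation-priority cells.
        suitable = ~np.isin(land_class, [2, 3])

        # Overlay: keep radiation only where the land-use layers allow a utility-scale plant.
        suitability = np.where(suitable, actual_radiation, np.nan)
        print(np.round(suitability, 1))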
  • Oksanen, Marja (2022)
    Alcohol policy, alcohol legislation and alcohol consumption have a long history in Finland. For a long time, Finnish people have been seen as binge drinkers, and harms caused by alcohol have been a real problem and burden for public health. Still, alcohol is present in our everyday life. Therefore, there need to be different ways to limit consumption and the related harms. This is where alcohol policy comes into action; its long history and the quite restrictive methods used in Finland have been developed for the welfare of the Finnish people and public health. Alcohol policy entails many aspects to consider. Discussing alcohol policy without taking into account factors such as self-regulation, alcohol culture and consumer practices leaves the discussion one-sided. The aim of this thesis is to identify what kind of policy measures were targeted when discussing the Alcohol Act 2018 in the plenary session before the vote on the Act. A further aim is to identify whether prejudice or stigma was present in the discussion and how alcohol culture was taken into consideration when discussing the new Act. In this study the focus is on alcohol culture and politics, and therefore the research design is qualitative. The material used in this research is a transcript of a plenary session held the day before the new Act was voted on. The method used to analyse the research material is directed content analysis. Still, it should be recognized that this research is also strongly related to rhetorical analysis. The debate in the plenary session was intense and strongly coloured by personal opinions and arguments. The discussion often shifted away from health policy to industrial policy, which leaves open a question about the justification of alcohol policy in general. Stigma was present in the discussion, and alcohol culture was referred to in both a negative and a positive sense. The nine target areas of alcohol policy were addressed, with pricing and availability emphasized more than the others. The topic in general is part of a wider societal discussion and should be addressed from a wider perspective than alcohol policy alone.
  • Snellman, Mikael (2018)
    Today many of the most popular service providers, such as Netflix, LinkedIn and Amazon, compose their applications from a group of individual services. These providers need to deploy new changes and features continuously without any downtime in the application and to scale individual parts of the system on demand. To address these needs, the use of microservice architecture has grown in popularity in recent years. In microservice architecture, the application is a collection of services which are managed, developed and deployed independently. This independence of services enables the microservices to be polyglot when needed, meaning that developers can choose the technology stack for each microservice individually depending on the nature of the microservice. This independent and polyglot nature of microservices can make developing a single service easier, but it also introduces significant operational overhead when not taken into account while adopting the microservice architecture. These overheads include the need for extensive DevOps, monitoring, infrastructure and preparation for the fallacies of distributed systems. Many cloud-native and microservice-based applications suffer from outages even with thorough unit and integration tests applied. This can be because distributed cloud environments are prone to failures at the node or even regional level, which cause unexpected behavior in the system when not prepared for. The application's ability to recover and maintain functionality at an acceptable level under such unexpected faults, also known as resilience, should also be tested systematically. In this thesis we give an introduction to the microservice architecture. We inspect an industry case where a leading banking company suffered from issues regarding resiliency. We examine the challenges of resilience testing microservice architecture based applications. We compose a small microservice application which we use to study defensive design patterns and the tools and methods available for testing the resilience of microservice architectures.
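    A minimal sketch of one commonly cited defensive design pattern of the kind studied above, a circuit breaker: after repeated failures, calls to a downstream service are short-circuited for a cool-down period instead of piling up. This is a generic illustration, not the implementation examined in the thesis case.

        # A tiny circuit breaker: trips open after max_failures consecutive errors,
        # rejects calls while open, and allows a trial call after reset_after seconds.
        import time

        class CircuitBreaker:
            def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
                self.max_failures = max_failures
                self.reset_after = reset_after
                self.failures = 0
                self.opened_at = None

            def call(self, func, *args, **kwargs):
                if self.opened_at is not None:
                    if time.monotonic() - self.opened_at < self.reset_after:
                        raise RuntimeError("circuit open: skipping call to downstream service")
                    self.opened_at = None          # half-open: let one trial call through
                    self.failures = 0
                try:
                    result = func(*args, **kwargs)
                except Exception:
                    self.failures += 1
                    if self.failures >= self.max_failures:
                        self.opened_at = time.monotonic()   # trip the breaker
                    raise
                self.failures = 0
                return result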
  • Räsänen, Mikko (2015)
    The conundrum between market entry and business development activities within innovation companies is generally regarded as a challenge. The energy industry as a whole is in flux, and a sustainable future requires drastic actions to reduce the effects of global warming and to adopt a circular economy model. Using the industrial innovation company St1 Biofuels Oy as a case, this thesis identifies the decision-making components of an opportunity-based target market analysis in a company which operates in an industry with notable resource scarcity, policy regulation and variable business models. In response, this study suggests an opportunity-based target market analysis model that illustrates a new framework for studying target markets in a systematic and analytical manner. For the purposes of the St1 Biofuels Oy case, a market intelligence tool was created to store and process the market data and to illustrate the most essential components of the theoretical model. The case study demonstrates the utilization of the opportunity model, presenting the internationalization criteria and justification for a potential new R&D concept investment decision. The implications of this thesis contribute to the decision-making of the case and aid in demonstrating analytical justification for internationalization at the strategic decision-making level. This thesis introduces literature relevant to the topic and reflects the existing theories against the new model concept design. The structure and empirical base of this study were drawn from a two-stage data collection, including extensive market research and investment calculations along with semi-structured interviews with specialists of the case company. The results of this thesis present a theoretical model, and the functioning of the model is then piloted with the case study variables of St1 Biofuels Oy. Based on the discussion in this thesis, further research is suggested that reflects the model as a theoretical framework in strategic marketing planning and value-based selling studies.
  • Seppälä, Eemi (2019)
    Methane (CH4) is a powerful greenhouse gas, and even though CH4 concentrations in the atmosphere have been increasing rapidly since the year 1750, large uncertainties still remain in the individual source terms of the global CH4 budget. Measuring the isotopic fractions of various CH4 sources should lead to new knowledge of the processes involved in CH4 formation and emission pathways. Nowadays stable isotope measurements for various CH4 sources are made quite routinely, but radiocarbon measurements have long been too expensive and time consuming. For this reason a new CH4 sampling system for radiocarbon measurements was developed at the Laboratory of Chronology of the University of Helsinki. The system allows sampling directly from the atmosphere or from different environmental sources using chambers. To demonstrate the functionality of the system, it was tested and optimized in various laboratory experiments and in the field. The laboratory measurements showed that before the combustion of CH4 to carbon dioxide (CO2), ambient carbon monoxide (CO) and CO2 can be removed from the sample gas flow for more than 10 hours at a flow rate of 1 l/min. After the CO and CO2 removal, the CH4 in the sample gas is combusted to CO2. The combustion efficiency for CH4 was 100% at a flow rate of 0.5 l/min. After CH4 is combusted to CO2, it is collected onto molecular sieves and can later be analyzed using an accelerator mass spectrometer. The laboratory measurements, however, showed that due to the adsorption of nitrogen (N2) onto the molecular sieves, the 1 g of molecular sieve material used in the molecular sample sieve tubes was not sufficient for low-concentration samples where the sampling times are very long. In the field, CH4 was collected from ambient atmospheric air at the Hyytiälä SMEAR II station, Juupajoki, Finland, and from tree and soil chambers. The radiocarbon content of the atmospheric CH4 was 102.27 ± 0.02 percent Modern Carbon (pMC) and 101.40 ± 0.02 pMC. These values were much lower than the expected values, indicating large spatial and temporal variability. The CH4 collected from chambers closed around tree stems had a radiocarbon content of 113.60 ± 0.37 pMC, which was slightly higher than the 108.71 ± 0.37 pMC measured from soil chambers located in the nearby Siikaneva peatland. This indicated that a larger amount of the CH4 emitted from the peatland surface was recently fixed near the soil surface, while a larger amount of the CH4 emitted from the tree-stem surface was of older origin, transported via roots from deeper depths of the soil. There is, however, a possibility that the lower radiocarbon content of the CH4 emitted from the peatland surface was due to a significant contribution from old CH4 fixed before the bomb effect and diffused from deeper depths of the soil. This would explain the results from the autumn campaign, where the radiocarbon contents were 91.84 ± 0.03 pMC during nighttime and 104.26 ± 0.03 pMC during daytime. These results also indicated that during the daytime more of the emitted CH4 is fixed near the surface of the peatland soil. One additional CH4 sample was collected in January 2019 from ambient atmospheric air at Kumpula, Helsinki, Finland, using a significantly larger molecular sample sieve. This sample had a radiocarbon content of 52.40 ± 0.21 pMC. The old carbon in the sample originated from fossil methane used in earlier laboratory experiments and indicated that the regeneration process for the larger sample sieve was incomplete.
    Overall the system functions very well when collecting samples from environmental chambers, as the CH4 concentrations are left to build up before collecting the sample. For atmospheric samples, for which the sampling times are longer, the sample sieve size and the regeneration time and temperature will have to be investigated further. In the future, more measurements of the radiocarbon content of individual CH4 sources are needed to provide better knowledge of the CH4 pathways. This portable system allows an efficient way to collect CH4 samples for radiocarbon analyses from various locations.
  • Leinonen, Sara (2019)
    The literature part of the study reviewed the recommended gluten quantification method, the immunological R5 ELISA. R5 is a monoclonal antibody that mainly recognizes an epitope that is especially abundant in the gluten protein subgroup ω-gliadin. The current PWG-gliadin reference material used in the ELISA leads to inaccuracy in the gluten content, because it cannot represent sample materials that differ in their gliadin composition. The aim of the experimental study was to compare the prolamin compositions of different wheat cultivars and their reactivity against the R5 antibody in sandwich ELISA. The aim was to find the most suitable ratio of the barley prolamin C-hordein to be used as a reference material for wheat gluten quantification. The ω-gliadin proportions of the different cultivars were calculated from RP-HPLC chromatograms. In order to compare the total wheat gluten reactivity of the cultivars in ELISA R5 with the gliadin standard and C-hordein in different ratios (10, 20 and 30% in BSA), Km values, which measure the sensitivity of the assay, were calculated. The method to separate gliadin and glutenin subgroups in RP-HPLC was optimized (solvent to extract gliadin and glutenin, temperature, injection volume, gradient). For cv. Crusoe the ω-, α/β- and γ-gliadins and HMW- and LMW-glutenins were identified. The selected wheat cultivars were categorized into four groups. The proportion of ω-gliadin in total gliadin ranged from 0.8 to 14.1% between the cultivars, whereas for PWG-gliadin this has been reported to be 7.7%. In terms of similar reactivity (Km value) in ELISA, 20% C-hordein was found to be the most suitable reference material (Km 90) for the selected wheat cultivars (Km average 92), instead of the current gliadin standard (Km 68). The advantage of the C-hordein standard is that its concentration, and thus reactivity, can be adjusted to match sample materials with different prolamin profiles. Unlike the current gliadin reference material, it can be used without any conversion factors, which improves the accuracy of the method.
  • Leppänen, Maija (2014)
    This thesis studies the current level of environmental management in the Ship Power business division of Wärtsilä Corporation and aims to identify the related development needs. Hitherto, environmental management has mainly been coordinated at the corporate level and implemented in geographically distributed local companies. Due to recent organizational changes, however, the significance of division-level environmental management in Ship Power has increased. The research goal is approached by examining the central elements of corporate environmental management and the challenges that the organizational structure places on it. Based on the findings, suggestions for further actions are given in order to develop environmental management in Ship Power. Empirical data was collected through 35 qualitative interviews with Wärtsilä employees from different functions, business lines and local companies in order to get a comprehensive view of environmental management in Ship Power related activities. The interviews were semi-structured in order to provide answers for certain areas of concern, but also to enable the disclosure of topics not defined by the interviewer. The data is categorized into themes according to the theoretical background, and its analysis is based on inductive reasoning. Based on the findings, environmental management in Ship Power is divided into two dimensions. Product-related environmental questions are handled in the business lines and operational issues in the local companies. This fragmentation of environmental knowledge causes an inconsistent environmental focus at different organizational levels and creates challenges for information sharing across the organization. The lack of corporate instructions on the implementation of the environmental management system has led to diverse practices in the local companies, and the lack of standardized documentation makes internal comparison between them difficult. Therefore the experience gained from the local management systems does not support organizational learning throughout the corporation. While the product-related environmental aspects are at the core of the business strategy, more attention could be paid to operational environmental management in Ship Power. For instance, the sharing of environmental knowledge could be strengthened in order to enhance employee awareness of the corporate practices and to facilitate discussion of best practices between the local units. A standardized documentation system would facilitate internal benchmarking and provide a means for centralized environmental performance follow-up. Because the local management systems are not sufficient to cover the global business processes, it would also be important to identify the environmental aspects of the Ship Power division. Furthermore, visible communication of the common environmental targets would help to create a consistent environmental focus in Ship Power.
  • Heikkonen, Hanna-Lotta (2014)
    The goal of this research was to produce guidelines for an eco-labeling program of wood and paper products in the U.S. market. The factors affecting consumers' willingness to pay for eco-labeled wood and paper products were examined using a meta-regression analysis. A systematic literature review was conducted to examine which on-product label characteristics are preferable. The results show that consumers in North America are willing to pay less for eco-labeled wood and paper products than European consumers. Wooden and durable goods are able to capture larger price premiums compared to less durable wood and paper products. Consumers are willing to pay more for eco-labeled products whose labels provide more information to the consumer. Among demographic variables, age is shown to positively influence the amount consumers are willing to pay for eco-labeled wood and paper products. Among desirable label characteristics, the contact information of the labeling agency and information about the environmental effects of the product were found to be important, in addition to information enabling product comparison. Environmental non-governmental organizations are perceived as the most credible label providers, as shown in past studies.
  • Elfving, Karoliina (2022)
    Catcher proteins and Tag peptides originate from split CnaB domains of Gram-positive bacterial surface proteins, which are stabilized by spontaneous intramolecular isopeptide bonds formed between lysine and asparagine residues. However, there is a limited number of non-cross-reacting Catcher and Tag pairs available in which the reaction occurs close to the diffusion limit and which can be used in multiple-fragment ligation to construct recombinant fusion proteins. Therefore, a new Catcher/Tag system – LplCatcher/LplTag – was developed in our group from a CnaB domain of Lactobacillus plantarum. However, the ligation efficiency of this pair needs to be improved to expand the application possibilities. Therefore, there is a need for an efficient library screening method that allows the detection of improved protein-peptide pairs in which the covalent interaction takes place rapidly. In this study a new high-throughput in vivo screening system was developed for visualizing the ligation of Catcher/Tag fusion proteins, using the splitFAST fluorogenic reporter system for detecting the phenotype and fluorescence-activated cell sorting (FACS) for separating the variants at the single-cell level based on fluorescence intensity. splitFAST is a system engineered by splitting a fluorescent protein named Fluorescence-Activating and absorption-Shifting Tag (FAST) into CFAST and NFAST. The system can be utilized for visualizing protein interactions because once NFAST and CFAST associate, in the presence of a fluorogen, they form the active and highly fluorescent FAST protein. Herein, the Catcher protein was fused with CFAST and the Tag peptide with NFAST, which allowed detecting the Catcher-Tag ligation ratio based on fluorescence with the splitFAST system. Next, a screening system was developed for detecting Catcher variants with improved ligation efficiency. The developed high-throughput screening system showed high potential, since visualizing the protein ligation was possible, and hence the system could help expand the Catcher/Tag toolbox by allowing large mutant library analyses.
  • Eklund, Tommy (2013)
    Large screens, interactive or not, are becoming a common sight at shopping centers and other public places. These screens are used to advertise or to share information interactively. Combined with the omnipresence of smartphones, this gives rise to a unique opportunity to join these two interfaces, combining their strengths and complementing their weaknesses. Smartphones are very mobile thanks to their small size and can access information from virtually anywhere, but they suffer from an overflow of information. Users have too many applications and websites to search through to find the information they want or need in a timely fashion. On the other hand, public screens are too large to provide information everywhere or in a personalized way, but they often have the information you need, when and where you need it. Thus large screens provide an ideal place for users to select content onto their smartphones. Large screens also have the advantage of screen size, and research has indicated that using a second screen with small handheld devices can improve the user experience. This thesis undertook the design and development of a prototype Android application for an existing large interactive public screen. The initial goal was to study the different aspects of personal mobile devices coupled with large public screens. The large screen interface is also under development as a ubiquitous system, and the mobile application was designed to be part of this system. Thus the design of the mobile application needed to be consistent with the public screen. During the development of this application it was observed that the small mobile screen could not support the content or interactions designed for a much larger screen. As a result, this thesis focuses on developing a prototype that further research can draw upon. This led to a study of small-screen graph data visualization and previous research on mobile applications working together with large public screens. This thesis presents a novel approach for displaying graph data designed for large screens on a small mobile screen. This work also discusses many challenges and questions related to large screen interaction with mobile devices that arose during the development of the prototype. An evaluation was conducted to gather both quantitative and qualitative data on the interface design and its consistency with the large screen interface in order to further analyze the resulting prototype. The most important findings in this work are the problems encountered and the questions raised during the development of the mobile application prototype. This thesis provides several suggestions for future research using the application, the ubiquitous system and the large screen interface. The study of related work and the prototype development also led to suggested design guidelines for this type of application. The evaluation data also suggests that the final mobile application design is both consistent with and performs better than a faithful implementation of the visuals and interaction model of the original large screen interface.
  • Ruottu, Toni (University of Helsinki, 2011)
    As the virtual world grows more complex, finding a standard way of storing data becomes increasingly important. Ideally, each data item would be brought into the computer system only once. References to data items need to be cryptographically verifiable, so the data can maintain its identity while being passed around. This way there will be only one copy of the user's family photo album, while the user can use multiple tools to show or manipulate the album. Copies of the user's data could be stored on some of his family members' computers and some of his own computers, but also at some online services which he uses. When all actors operate over one replicated copy of the data, the system automatically avoids a single point of failure. Thus the data will not disappear with one computer breaking or one service provider going out of business. One shared copy also makes it possible to delete a piece of data from all systems at once, on the user's request. In our research we tried to find a model that would make data manageable to users and make it possible to have the same data stored at various locations. We studied three systems, Persona, Freenet, and GNUnet, that suggest different models for protecting user data. The main application areas of the systems studied include securing online social networks, providing anonymous web access, and preventing censorship in file-sharing. Each of the systems studied stores user data on machines belonging to third parties. The systems differ in the measures they take to protect their users from data loss, forged information, censorship, and being monitored. All of the systems use cryptography to secure the names used for the content and to protect the data from outsiders. Based on the gained knowledge, we built a prototype platform called Peerscape, which stores user data in a synchronized, protected database. Data items themselves are protected with cryptography against forgery, but not encrypted, as the focus has been on disseminating the data directly among family and friends instead of letting third parties store the information. We turned the synchronizing database into a peer-to-peer web by revealing its contents through an integrated HTTP server. The REST-like HTTP API supports the development of applications in JavaScript. To evaluate the platform's suitability for application development we wrote some simple applications, including a public chat room, a BitTorrent site, and a flower-growing game. During our early tests we came to the conclusion that using the platform for simple applications works well. As web standards develop further, writing applications for the platform should become easier. Any system this complex will have its problems, and we are not expecting our platform to replace the existing web, but we are fairly impressed with the results and consider our work important from the perspective of managing user data.