Browsing by Title


  • Singh, Maninder Pal (2016)
    Research in the healthcare domain has primarily focused on diseases as reflected in the physiological changes of an individual. Physiological changes are often linked to multiple streams originating from different biological systems of a person. Together, the streams from various biological systems form the attributes for evaluating symptoms or diseases. The interconnected nature of different biological systems encourages an aggregated approach to understanding symptoms and predicting diseases. These streams, or physiological signals, obtained from healthcare systems contribute a vast amount of vital information to healthcare data. Advances in technology make it possible to capture physiological signals over long periods, yet most of the data acquired from patients is observed only momentarily or remains underutilized. The continuous nature of physiological signals demands context-aware real-time analysis. This thesis addresses these research aspects using a large-scale data processing solution. We have developed a general-purpose distributed pipeline for cumulative analysis of physiological signals in medical telemetry. The pipeline is built on top of a framework that performs computation on a cluster in a distributed environment. Emphasis is placed on creating a unified pipeline for processing streaming and non-streaming physiological time series signals. The pipeline provides fault-tolerance guarantees for the processing of signals and scales to multiple cluster nodes. In addition, the pipeline enables indexing of physiological time series signals and provides visualization of real-time and archived time series signals. It offers interfaces that allow physicians and researchers to use distributed computing for low-latency, high-throughput signal analysis in medical telemetry.
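    To make the windowed, cumulative style of analysis described above concrete, here is a minimal self-contained sketch. It is illustrative only and not the thesis pipeline (which runs on a distributed cluster framework); the signal, sampling rate and window length are hypothetical.

```python
# Illustrative sketch only: sliding-window aggregation over a physiological
# signal stream -- the per-signal pattern a distributed pipeline would run
# in parallel across cluster nodes. Signal and parameters are hypothetical.
from collections import deque
from statistics import mean

def windowed_mean(samples, window_size=250):
    """Yield (sample, window mean) for a stream of samples,
    e.g. an ECG sampled at 250 Hz with a one-second sliding window."""
    window = deque(maxlen=window_size)
    for sample in samples:
        window.append(sample)
        yield sample, mean(window)

if __name__ == "__main__":
    import math
    stream = (math.sin(t / 10.0) for t in range(1000))  # synthetic signal
    for i, (x, avg) in enumerate(windowed_mean(stream)):
        if i % 250 == 249:  # report once per "second" of signal
            print(f"sample={x:+.3f}  window_mean={avg:+.3f}")
```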
  • Laukkanen, Janne Johannes (2018)
    The vast amount of data created in the world today requires an unprecedented amount of processing power to be turned into valuable information. Importantly, more and more of this data is created on the edges of the Internet, where small computers, capable of sensing and controlling their environments, are producing it. Traditionally these so-called Internet of Things (IoT) devices have been utilized as sources of data or as control devices, and their rising computing capabilities have not yet been harnessed for data processing. Also, the middleware systems that are created to manage these IoT resources have heterogeneous APIs, and thus cannot communicate with each other in a standardized way. To address these issues, the IoT Hub framework was created. It provides a RESTful API for standardized communication, and includes an execution engine for distributed task processing on the IoT resources. A thorough experimental evaluation shows that the IoT Hub platform can considerably lower the execution time of a task in a distributed IoT environment with resource constrained devices. When compared to theoretical benchmark values, the platform scales well and can effectively utilize dozens of IoT resources for parallel processing.
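    As a rough illustration of the pattern the abstract describes (a RESTful front end that fans a task out to IoT resources for parallel execution), here is a hedged sketch in Python with Flask. The paths, payload shape and resource addresses are invented for the example and are not the actual IoT Hub API.

```python
# Hypothetical sketch, not the IoT Hub API: a REST endpoint that takes a
# task split into chunks and executes the chunks in parallel on registered
# IoT resources. Resource URLs, paths and payloads are invented.
import concurrent.futures

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
RESOURCES = ["http://10.0.0.11:8000", "http://10.0.0.12:8000"]  # example nodes

@app.route("/tasks", methods=["POST"])
def submit_task():
    chunks = request.get_json()["chunks"]  # caller pre-splits the task

    def run(resource_and_chunk):
        resource, chunk = resource_and_chunk
        return requests.post(f"{resource}/execute", json=chunk).json()

    with concurrent.futures.ThreadPoolExecutor() as pool:
        results = list(pool.map(run, zip(RESOURCES, chunks)))
    return jsonify(results)

if __name__ == "__main__":
    app.run(port=5000)
```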
  • Hirvonen, Juho (2012)
    In this work we study a graph problem called edge packing in a distributed setting. An edge packing p is a function that associates a packing weight p(e) with each edge e of a graph such that the sum of the weights of the edges incident to each node is at most one. The task is to maximise the total weight of p over all edges. We are interested in approximating a maximum edge packing and in finding maximal edge packings, that is, edge packings such that the weight of no edge can be increased. We use the model of distributed computing known as the LOCAL model. A communication network is modelled as a graph, where nodes correspond to computers and edges correspond to direct communication links. All nodes start at the same time and run the same algorithm. Computation proceeds in synchronous communication rounds, during each of which each node can send a message through each of its communication links, receive a message from each of its communication links, and then do unbounded local computation. When a node terminates the algorithm, it must produce a local output – in this case a packing weight for each incident edge. The local outputs of the nodes must together form a feasible global solution. The running time of an algorithm is the number of rounds it takes until all nodes have terminated and announced their outputs. In a typical distributed algorithm, the running time is a function of n, the size of the communication graph, and ∆, the maximum degree of the communication graph. In this work we are interested in deterministic algorithms whose running time is a function of ∆, but not of n. We review an O(log ∆)-time constant-approximation algorithm for maximum edge packing, and an O(∆)-time algorithm for maximal edge packing. Maximal edge packing is an example of a problem where the best known algorithm has a running time that is linear in ∆. Other such problems include maximal matching and (∆ + 1)-colouring. However, few matching lower bounds exist for these problems: by prior work it is known that finding a maximal edge packing requires time Ω(log ∆), leaving an exponential gap between the best known lower and upper bounds. Recently Hirvonen and Suomela (PODC 2012) showed a linear-in-∆ lower bound for maximal matching. This lower bound, however, applies only in weaker, anonymous models of computation. In this work we show a linear-in-∆ lower bound for maximal edge packing. It applies also in the stronger port numbering model with orientation. Recently Göös et al. (PODC 2012) showed that for a large class of optimisation problems, the port numbering with orientation model is as powerful as a stronger, so-called unique identifier model. An open question is whether this result can be applied to extend our lower bound to the unique identifier model.
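    In symbols, the problem defined above is the following linear program (equivalently, a fractional matching):

```latex
% Edge packing for a graph G = (V, E): find p : E -> [0, 1] that
\[
  \text{maximises } \sum_{e \in E} p(e)
  \qquad \text{subject to} \qquad
  \sum_{e \ni v} p(e) \le 1 \quad \text{for every } v \in V .
\]
% A packing is maximal when no single weight p(e) can be increased
% without violating the constraint at one of e's endpoints.
```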
  • Mäki, Jussi Olavi Aleksis (2013)
    With the increasing growth of data traffic in mobile networks, there is an ever-growing demand from operators for a more scalable and cost-efficient network core. Recent successes in deploying Software-Defined Networking (SDN) in data centers and large network backbones have given it credibility as a viable solution for meeting the requirements of even large core networks. Software-Defined Networking is a new paradigm in which the control logic of the network is separated from the network elements into logically centralized controllers. This separation of concerns offers more flexibility in network control and makes writing new management applications, such as routing protocols, easier, faster and more manageable. This thesis is an empirical experiment in designing and implementing a scalable and fault-tolerant distributed SDN controller and a management application for managing the GPRS Tunneling Protocol flows that carry user data traffic within the Evolved Packet Core. The experimental implementation is built using modern open-source distributed system tools such as the Apache ZooKeeper distributed coordination service and Basho's Riak distributed key-value database. In addition to the design, a prototype implementation is presented and its performance is evaluated.
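    One of the fault-tolerance building blocks named above, distributed coordination through Apache ZooKeeper, can be sketched with the kazoo Python client. This is a generic leader-election sketch, not the thesis code; the znode path and controller identifier are hypothetical.

```python
# Sketch of one fault-tolerance building block: leader election through
# Apache ZooKeeper via the kazoo client. Only the elected controller
# instance runs the leadership function; if it dies, ZooKeeper lets a
# standby instance win the next election. Path and id are hypothetical.
from kazoo.client import KazooClient

def act_as_leader():
    print("This controller instance now manages the GTP flows.")

zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()
election = zk.Election("/sdn/controller-election", identifier="controller-1")
election.run(act_as_leader)  # blocks until elected, then calls the function
```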
  • Alhalaseh, Rola (2018)
    Sensors of many different kinds connect to the IoT network and generate a large number of data streams. We explore the possibility of performing stream processing at the network edge and propose an architecture for doing so. This thesis work is based on a prototype solution developed by Nokia. The system operates close to the data sources and retrieves data based on requests made by applications through the system. Processing data close to where it is generated can save bandwidth and assist in decision making. This work proposes a processing component operating at the far edge. The applicability of the prototype solution with the proposed processing component is illustrated in three use cases, involving analysis performed on values of Key Performance Indicators, data streams generated by air quality sensors called Sensordrones, and recognition of car license plates by an application of deep learning.
  • Lange, Moritz Johannes (2020)
    In the context of data science and machine learning, feature selection is a widely used technique that focuses on reducing the dimensionality of a dataset. It is commonly used to improve model accuracy by preventing data redundancy and overfitting, but can also be beneficial in applications such as data compression. The majority of feature selection techniques rely on labelled data. In many real-world scenarios, however, data is only partially labelled and thus requires so-called semi-supervised techniques, which can utilise both labelled and unlabelled data. While unlabelled data is often obtainable in abundance, labelled datasets are smaller and potentially biased. This thesis presents a method called distribution matching, which offers a way to do feature selection in a semi-supervised setup. Distribution matching is a wrapper method, which trains models to select the features that contribute most to model accuracy. It addresses the problem of biased labelled data directly by incorporating unlabelled data into a cost function which approximates the expected loss on unseen data. In experiments, the method is shown to successfully minimise the expected loss transparently on a synthetic dataset. Additionally, a comparison with related methods is performed on the more complex EMNIST dataset.
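    A standard way to "incorporate unlabelled data into a cost function which approximates expected loss on unseen data" is an importance-weighted risk estimate. The sketch below shows that generic construction; it is not necessarily the exact objective used in the thesis.

```latex
% Generic importance-weighted risk estimate (a standard construction, not
% necessarily the thesis's exact cost function): with labelled samples
% (x_i, y_i) drawn from p_L and unlabelled data characterising the target
% distribution p_U, reweight the empirical loss so that it approximates
% the expected loss on unseen data:
\[
  \widehat{R}(f) = \frac{1}{n} \sum_{i=1}^{n}
      \frac{p_U(x_i)}{p_L(x_i)} \, \ell\bigl(f(x_i), y_i\bigr)
  \;\approx\; \mathbb{E}_{x \sim p_U}\bigl[\ell(f(x), y)\bigr],
\]
% where the density ratio p_U / p_L is estimated by matching the labelled
% and unlabelled feature distributions.
```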
  • Gaire, Surakshya (University of Helsinki, 2016)
    The objective of this master's thesis was to better understand the impact of black carbon (BC) and its distribution in Northern Europe and the Arctic. To achieve this goal, information on observations relevant to BC pollution in the Arctic was collected, covering all main BC measurement campaigns along with active satellite operations. In this study, BC concentration and deposition were estimated with the chemical transport model (CTM) SILAM (System for Integrated Modelling of Atmospheric Composition). The model was driven with the MACCity (Monitoring Atmospheric Composition and Climate), EDGAR-HTAP (Emission Database for Global Atmospheric Research – Hemispheric Transport of Air Pollution) and ECLIPSE (Evaluating the Climate and Air Quality Impacts of Short-Lived Pollutants) emission inventories. The year 2010 was chosen for the computations because of the better availability of data during that year. The literature section explains black carbon processes in the atmosphere along with BC properties and characteristics. This is followed by a description and analysis of the data, and an interpretation of the model output on seasonal deposition and concentration. As shown by the model-measurement comparison, the model captured the measured BC and organic carbon (OC) quite well for all emission inventories. However, the correlation coefficient for OC remained weak for most of the stations in Europe. The overall performance for BC at European stations is substantially better than in the Arctic areas. The deposition of BC and OC shows that the seasonal transport of BC from source regions is evident in the Arctic and near-Arctic areas. Dry deposition is higher in the winter period than in the summer period. For 2010, the SILAM model suggests winter-period BC concentrations of 0.23 µg/m³ with the MACCity inventory and 0.26 µg/m³ with ECLIPSE. This study provides a best-performing setup for modelling BC transport and deposition in Northern Europe and the Arctic, despite the absence of an ageing process. More observational data from Arctic stations would further improve the results and model performance. Finally, the study gives insight into the quality of existing emission inventories and their capability to reproduce the seasonal deposition and concentration of BC in the Arctic.
  • Vento, Eero (2017)
    Tourism is one of the main contributors to the fight against poverty, as it has become one of the strongest drivers of trade and prosperity in the Global South. Protected area tourism is an especially quickly growing segment of the industry, and it plays an important role in regional development in many rural areas of the Global South. However, territories labelled as protected areas represent a great variety of spaces. This research aims at unifying the holistic picture of protected area tourism governance by analysing how protected areas with divergent landownership arrangements, management objectives and associated regulations influence tourism development and its local socio-economic impacts at the grass roots. This comparative case-study survey scrutinizes local-level tourism governance and territorial regulations in three neighbouring protected areas in Taita Taveta County, Kenya. The Tsavo National Parks are state-owned conservancies focusing on conserving biodiversity. The LUMO community wildlife sanctuary is a nature tourism project owned and orchestrated by a local community, which aims to advance local socio-economic development via tourism while preserving the environment. The third area, Sarova, is a privately owned conservancy harnessed solely for nature tourism and profit-making. The areas are subject to the same legislative framework, and international phenomena influence them in similar ways, which makes a comparison of their divergent management objectives and local-level regulations expedient. By giving voice to local-level tourism stakeholders, it is possible to point out how the category of the landowner (i.e. public, private or community) and the areas' respective management objectives influence tourism operations and shape the socio-economic outcomes of both conservation and tourism. The comparative analyses focus first on the spatial, socially constructed preconditions for tourism development and second on its developmental outcomes, which are analysed primarily by reflecting on the livelihood changes generated by protected area tourism and the protection regulations in place. The dataset was gathered during field research in February–March 2016, and it is mainly based on semi-structured interviews with tourism employees, employers and regional experts. The principal method of interviewing is supplemented by observation and statistics, and the data is analysed by thematic and qualitative content analyses. The protected areas' management objectives and associated regulations have drastic impacts on tourism development within their respective borders. The local administrations of the protected areas were identified as the primary institutions explaining the stark spatial differences in the case-study areas' tourist numbers. Instead of the mere "type" of the landowner, the areas' respective management objectives and associated regulations determined whether protected area tourism generated livelihoods or other positive socio-economic outcomes. Altogether, similar preconditions for tourism development and similar socio-economic outcomes cannot be expected from all territories labelled as protected areas.
  • Strömgård, Simon (2016)
    Multiple factors determine the diversity of diatoms in running waters. Diversity is a complex concept made up of different components, and it can be divided into alpha, beta and gamma diversity. These types of diversity are regulated both by factors operating on a large geographic scale and by local environmental factors. Studies concentrating on the diversity patterns of diatoms have become more common in the last 10 years, and beta diversity in particular has received increasing interest. Despite this, the driving mechanisms are still not fully understood in aquatic ecosystems. The aim of this thesis is to investigate which factors affect alpha and beta diversity in 10 streams in southern Finland. The influence of habitat heterogeneity on beta diversity is also investigated. In addition, the aim is to examine which local environmental factors structure the variation in species composition. The study covers a 115 km wide area to minimize the effect of large-scale factors on species composition. The material consists of environmental data and diatom data from 49 study sites. The land use data used in the study are derived from the CORINE Land Cover 2012 dataset. All samples were collected during a two-week period (30.7.2014–11.8.2014). The statistical methods used were linear models, generalized linear models (GLM), distance-based redundancy analysis (db-RDA) and the test for homogeneity of multivariate dispersions (PERMDISP). Water conductivity and light conditions at the study sites were strong environmental factors determining diatom alpha diversity. Habitat heterogeneity showed only a marginally significant positive relationship with beta diversity, but a clear trend was visible in the data. The db-RDA results showed that several environmental factors accounted for the variation in species composition: conductivity, light, water color, water temperature and stream width were important explanatory factors. These results suggest a possible connection between habitat heterogeneity and beta diversity, and further research should be done to determine whether the relationship is significant. Local environmental factors are important in structuring species composition. Anthropogenic stress factors influencing stream ecosystems can affect patterns of beta diversity and should be emphasized in future research.
  • Laurila, Tiia (2018)
    A Differential Mobility Particle Sizer (DMPS) can be used to measure the number size distribution of atmospheric aerosol particles. A DMPS system consists of an impactor, a dryer, a bipolar diffusion charger, a Differential Mobility Analyzer (DMA) and a Condensation Particle Counter (CPC). This thesis compares the counting statistics of a modified A20 CPC and a TSI 3776 CPC measuring in parallel in a DMPS system. The smallest aerosol particles affect the environment and human health, which is why there is a growing need to measure the size distribution of even the smallest particles accurately. The uncertainties of the aerosol number size distribution and of the quantities derived from it are, however, not yet fully understood. This work aims to improve the counting statistics of a conventional CPC and to study the uncertainties of the number size distribution and of derived quantities such as the formation rate (J) and the growth rate (GR). A conventional CPC, operating without a sheath flow, can be modified to detect particles even below 3 nm by increasing the temperature difference between the saturator and the condenser and by changing the aerosol flow. In this work, the aerosol flow of the A20 CPC through the optics was increased to 2.5 litres per minute to minimize diffusion losses and to improve the counting statistics. Compared to the TSI 3776 CPC, the modified A20 CPC has a 50 times larger aerosol flow, so the modified A20 can be expected to count more particles with a smaller uncertainty than the TSI 3776 UCPC. The modified A20 CPC has better counting statistics, so the relative error of the size distribution due to counting is smaller. The modified A20 CPC counts on average 50 times more particles over the whole size range measured by the DMPS system (1–40 nm). The GR calculated with the modified A20 CPC is about 60% larger for the smallest (3–6 nm) particles and about 3% larger for 6–11 nm particles. J is also about 30% larger for 3–6 nm particles when calculated with the modified A20 CPC. The uncertainty due to CPC counting should be taken into account when determining the total error of a DMPS measurement. Counting statistics matter not only for the number size distribution but also for the quantities derived from it.
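    Why a 50-fold aerosol flow improves counting statistics follows from standard Poisson counting (textbook reasoning, not quoted from the thesis):

```latex
% Poisson counting: a CPC that counts N particles in a sampling interval
% has a relative standard uncertainty of
\[
  \frac{\sigma_N}{N} = \frac{\sqrt{N}}{N} = \frac{1}{\sqrt{N}} ,
\]
% so counting roughly 50 times more particles in the same time shrinks
% the relative counting error by a factor of \(\sqrt{50} \approx 7\).
```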
  • Salmi, Joni (2019)
    Docker is an emerging technology that makes Linux containers easy to use for developers and system administrators. Unlike virtual machines, Linux containers share the host OS kernel and are isolated using kernel features. Docker containers are lightweight and package applications in a distributable format known as Docker images. In this thesis, we conduct a literature review and provide an overview of the Docker ecosystem. The literature review summarizes the current state of research and reports the most relevant and important findings. We explore the use cases, performance, security and orchestration of containers and compare them with virtual machines and bare-metal environments.
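    A minimal sketch of the kernel-sharing point above, using the Docker SDK for Python (assumes a local Docker daemon and the docker package from pip; illustrative, not from the thesis):

```python
# Minimal sketch (not from the thesis): launching an isolated process in a
# container with the Docker SDK for Python. Assumes a local Docker daemon.
import docker

client = docker.from_env()  # connect to the host's Docker daemon

# Containers share the host kernel, so startup is fast compared to a VM:
# this runs a command inside an isolated alpine container and returns its
# output once the process exits.
output = client.containers.run("alpine:latest", "uname -r", remove=True)
print(output.decode().strip())  # prints the *host* kernel version
```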
  • Hansson, Kristian (2019)
    The purpose of edge computing is to move data processing closer to the data source, since the computing capacity of centralized servers will not suffice to analyse all data simultaneously in the future. The Internet of Things is one of the use cases of edge computing. Edge computing systems are fairly complex and increasingly require the application of agile DevOps practices, and suitable technologies must be found to implement these practices. The first research question was set as: What kinds of technical solutions have been applied to delivering edge computing applications? It was answered by examining industry solutions, i.e. those of cloud service providers. The technical solutions revealed that either containers or packaged directories are used as the delivery vehicle for edge computing applications, and that lightweight communication protocols or a VPN connection are used for communication between the edge and the server. In the literature review, container clusters were found to be a possible management tool for edge computing. The second research question was derived from the results of the first: Can Docker Swarm be utilized in operating edge computing applications? The question was answered with an empirical case study. A centralized delivery process for edge computing applications was built using the Docker Swarm container cluster software, cloud servers and Raspberry Pi single-board computers. In addition to delivery, the study considered runtime monitoring of the software, rollback to the previous software version, grouping of cluster devices, attachment of physical peripherals and the possibility of different processor architectures. The results showed that Docker Swarm can be used as-is for managing edge computing software: it is suitable for delivery, monitoring, rolling back to a previous version and grouping, and it can be used to create clusters that run the same software on processors with different architectures. However, Docker Swarm proved unsuitable for controlling peripherals attached to an edge device. The large number of edge computing solutions offered by industry indicates wide interest in the practical application of containers. Based on this study, container clusters in particular proved to be a promising technology for managing edge computing applications. To obtain further evidence, broader empirical follow-up studies using a similar setup are needed.
  • Harhio, Säde (2022)
    The importance of software architecture design decisions has been known for almost 20 years. Knowledge vaporisation is a problem in many projects, especially in the current fast-paced culture, where developers often switch from one project to another. Documenting software architecture design decisions helps developers understand the software better and make informed decisions in the future. However, documenting architecture design decisions is highly undervalued: it does not create any revenue in itself, and it is often a disliked and therefore neglected part of the job. This literature review explores what methods, tools and practices are suggested in the scientific literature, as well as what practitioners recommend in the grey literature. What makes these methods good or bad is also investigated. The review covers the past five years and 36 analysed papers. The evidence gathered shows that most of the scientific literature concentrates on developing tools to aid the documentation process. Twelve of the nineteen grey literature papers concentrate on Architecture Decision Records (ADRs). ADRs are small template files which, as a collection, describe the architecture of the entire system. ADRs appear to be what practitioners have become used to over the past decade, as they were first introduced in 2011. What is seen as beneficial in a method or tool is low cost and low effort while producing concise, good-quality content; what is seen as a drawback is high cost, high effort, and producing too much or badly organised content. The suitability of a method or tool depends on the project itself and its requirements.
  • Jokinen, Olli (2024)
    The rise of large language models (LLMs) has revolutionized natural language processing, particularly through transfer learning and fine-tuning paradigms that enhance the understanding of complex textual data. This thesis builds upon the concept of fine-tuning to improve the understanding of Finnish Wikipedia articles. Specifically, a BERT-based language model is fine-tuned to create high-quality document representations from Finnish texts. The learned representations are applied to downstream tasks, where the model's performance is evaluated against baseline models. This thesis draws on the SPECTER paper, published in 2020, which introduced a training framework for fine-tuning a general-purpose document embedder. SPECTER was trained using a document-level training objective that leveraged document link information. Originally, SPECTER was designed for scientific articles, utilizing citations between articles. The training instances consisted of triplets of query, positive, and negative papers, with the aim of capturing the semantic similarity of the documents. This work extends the SPECTER framework to Finnish Wikipedia data. While scientific articles have citations, Wikipedia's cross-references are used to build a document graph that captures the relatedness between articles. Additionally, Wikipedia data is publicly available as a full data dump, making it an attractive choice for the dataset in this thesis. One of the objectives is to demonstrate the flexibility of the SPECTER framework on a new dataset that has a similar networked structure to that of scientific articles. The fine-tuned model can be used as a general-purpose tool for various tasks and applications; however, in this thesis, its performance is measured in topic classification and cross-reference ranking. The Transformer-based language model produces fixed-length embeddings, which are used as features in the topic classification task and as vectors to measure the L2 distance of article vectors in the cross-reference prediction task. This thesis shows that the proposed model, WikiSpecter, optimized with a document-level objective, outperformed baseline models in both tasks. The performance indicates that Finnish Wikipedia provides relevant cross-references that help the model capture relationships across a range of topics.
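    For reference, the document-level training objective SPECTER introduced (Cohan et al., 2020) is a triplet margin loss over the embedded documents; the thesis presumably adapts the same form to Wikipedia cross-references:

```latex
% SPECTER's triplet margin loss: embeddings of a query document P^Q, a
% linked (positive) document P^+ and an unlinked (negative) document P^-,
% with margin m and a distance d (e.g. the L2 distance used above):
\[
  \mathcal{L} = \max\Bigl\{ d\bigl(P^{Q}, P^{+}\bigr)
                          - d\bigl(P^{Q}, P^{-}\bigr) + m ,\; 0 \Bigr\} .
\]
```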
  • Linnoinen, Krista (2013)
    Mathematics teaching has been an active field of research and development at the Department of Mathematics and Systems Analysis at Aalto University. This research has been motivated by a desire to increase the number of students that pass compulsory basic mathematics courses without compromising on standards. The courses aim to provide the engineering students with the mathematical skills needed in their degree programmes, so it is essential that a proper foundation is laid. Since 2006, a web-based automated assessment system called STACK has been used on the basic mathematics courses at Aalto University for supplementary exercises to aid learning. In this thesis, computer-aided mathematics teaching and, in particular, automated assessment are studied to investigate what effect attempting to solve online exercises has on mathematical proficiency. This is done by using a Granger causality test. For this, the first two of the three basic courses are examined. The concepts relating to learning and computer-aided mathematics teaching, as well as the developments made at Aalto University, including Mumie, are first presented. Then, the statistical methodology, the theoretical framework and the test procedure for Granger causality are described. The courses and the data, which was collected from STACK and used to quantify mathematical proficiency for the Granger causality test, are then reviewed. Finally, the results and implications are presented. The Granger causality tests show that there exists a Granger-causal relationship such that mathematical proficiency affects the desire to attempt to solve exercises. This holds for both of the interpretations used for quantifying mathematical proficiency and for all variations of the penalty deducted for incorrect attempts. The results imply that the exercises are too difficult for the students and that students tend to give up quickly. Thus, the Granger causality tests produced statistically significant results to back up what teachers have always known: students are discouraged by failure, but encouraged by success. The results provide teachers with valuable information about the students' abilities and enable teachers to alter the teaching accordingly to better support the students' learning.
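    For reference, the bivariate Granger causality test takes the textbook form below (not quoted from the thesis):

```latex
% Bivariate Granger causality (textbook form): x Granger-causes y if the
% restriction b_1 = ... = b_k = 0 is rejected in the regression
\[
  y_t = c + \sum_{i=1}^{k} a_i \, y_{t-i}
          + \sum_{i=1}^{k} b_i \, x_{t-i} + \varepsilon_t ,
\]
% typically via an F-test comparing this model against the restricted
% model that omits the lagged x terms.
```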
  • Forsman, Pauliina (2023)
    The green transition is necessary for mitigating climate change. However, it is not a problem-free development pathway from the points of view of global justice and social sustainability, as the manufacturing of green technologies requires great amounts of minerals from developing countries. Competition for mineral resources is creating growing pressure to increase mining activities, which in many countries involve environmental and human rights issues. This is feared to cause environmental destruction and inhumane working and living conditions for people in the mining areas, creating new global inequalities. To avoid this trajectory, demands have been presented for a just green transition, in which the benefits and harms of energy systems would be distributed more evenly across the globe. The political pressure to implement the green transition is great, and many actors worldwide have committed to various carbon neutrality goals; cities play a key role in this. By decision of a majority of its city councillors, the city of Helsinki has also set an ambitious goal of being carbon neutral by 2030, which requires a fast implementation of the green transition. In this master's thesis, the discussion minutes of the Helsinki city council from the years 2019–2022 were studied with an interpretative approach using discourse analysis as the method. The purpose was to find out how the green transition is discussed in the council and which factors influence the perceptions of the green transition presented there. In addition, the purpose was to examine whether the council discussions propose any solutions to the challenges of global injustice connected to the green transition, or whether those problems are recognized at all. As a result, three discourses were interpreted from the data: an unproblematic discourse, a critical discourse, and a must-do discourse. The unproblematic discourse viewed the green transition in a positive and/or neutral light, emphasizing its possibilities for climate change mitigation; economic perspectives were also strongly present in this context. The critical discourse covered economic and social grievances related to the green transition, concerning security of supply, the economy, and ecological and social sustainability. In the third, must-do discourse, the meaning of green technology was formed through the mitigation of climate change, which was seen as threatening all life on Earth; in this view, global warming itself was the greatest social and justice issue. The discourses and perceptions of Helsinki's green transition are strongly influenced by the city's practice of focusing its emission calculations only on reducing the city's direct CO2 emissions. Thus, the global social effects caused by Helsinki's green transition cannot be verified with the city's current evaluation methods. Consequently, the councillors discuss the green transition from a strongly local perspective.
  • Toivanen, Aleksi (2020)
    As browsers have matured as a programming environment, the Document Object Model (DOM) has become established as the central rendering target. The DOM enables efficient management of the document, but using it requires deep expertise, and not all of the programming interfaces it offers are without problems: the DOM is easy to use inefficiently, and certain parts of it are error-prone. These challenges have been addressed by offering more manageable, higher-level programming environments and interfaces. The central role of DOM-related challenges in browser-based application development can be seen in the large number of frameworks and libraries that render to the DOM. The solutions and approaches to the challenges of the DOM vary greatly. This thesis examines how rendering frameworks, libraries and languages approach the problems of rendering to the DOM, and to what extent these solutions are tied specifically to the DOM as a rendering target. The thesis analyses seven frameworks, libraries and languages that render to the DOM, selected on the basis of the different approaches and techniques they use. They aim to abstract the developer's access to the real DOM, thereby preventing the most common error situations, and they hide their rendering. The thesis explains how the analysed frameworks and libraries work and describes the central design decisions they make. The analysis of individual frameworks cannot be directly generalized to frameworks outside the analysis, but it does give indications of prevailing broader trends. Common features can be found across the framework analyses. The analysis demonstrates the abundance of solutions for optimized rendering to the DOM: the same end result can be reached in many different ways, with the techniques offering differing strengths.
  • Stefańska, Marta (2023)
    In this work, molecular mass determination by diffusion-ordered nuclear magnetic resonance spectroscopy was carried out for a series of poly(2-oxazoline)s, polypeptoids and poly(2-oxazine)s. The samples included linear, star-like and cyclized homopolymers and block copolymers. The data was calibrated against polyethylene glycol, polystyrene and poly(methyl methacrylate) standards. The results were compared with those obtained by matrix-assisted laser desorption/ionization mass spectrometry, size exclusion chromatography, rolling-ball viscometry and end-group analyses based on proton nuclear magnetic resonance. It was concluded that, in general, diffusion-ordered spectroscopy gives a very accurate estimation of masses up to 30 kg/mol in deuterated water and dimethyl sulfoxide, especially after viscosity correction. In addition, nuclear magnetic resonance spectroscopy provides a wealth of information about the samples, including their structure and possible impurities. In summary, this methodology can be successfully applied to different polymers, and it is invaluable in the absence of standards with solubility similar to that of the analyzed polymers, since the viscosity correction enables a comparison of results measured in different solvents.
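    For context, molecular mass determination by diffusion-ordered spectroscopy typically relies on a power-law calibration of diffusion coefficients against standards (standard DOSY methodology, not a formula quoted from the thesis):

```latex
% Power-law calibration in DOSY: the diffusion coefficient D of a polymer
% scales with its molar mass M as
\[
  D = K M^{-\alpha}
  \qquad \Longleftrightarrow \qquad
  \log D = \log K - \alpha \log M ,
\]
% with K and alpha fitted to standards (here PEG, PS and PMMA). The
% viscosity correction rescales D by the solvent viscosity, in the spirit
% of Stokes-Einstein, D = k_B T / (6 \pi \eta r_H), so that calibrations
% measured in different solvents become comparable.
```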
  • Rautsola, Iiro (2019)
    Multimodality imaging is an efficient, non-invasive method for investigating molecular and cellular processes in vivo. However, the potential of multimodality imaging in plant studies is yet to be fully realized, largely due to the lack of research into suitable molecular tracers and instrumentation. Iodine has PET- and SPECT-compatible radioisotopes that have significant advantages over other radioisotopes applied in plant radioisotope imaging, and it can be incorporated into small molecules via a variety of reactions. In this master's thesis, a radioiodination method exploiting a novel, Dowex® H+-mediated addition of iodine to terminal alkynes was optimized and tested on two D-glucose analogues. The goal of the sugar analogue radioiodination was to develop a radioiodinated molecular tracer for plant carbohydrate metabolism studies. The parameters under optimization were the activation of Dowex® by HCl, reaction temperature, carrier amount, solvent, and evaporation of excess water. The best results were achieved under the following conditions: Dowex® activated with HCl, reaction temperature 95 °C, carrier amount 3.0 µmol, cyclohexanol as solvent, and excess water evaporated. The Dowex® approach was compared to electrophilic reactions with Chloramine-T and Iodogen, and it was concluded that the Dowex® approach leads to superior radiochemical yields under the optimized conditions. The Dowex® method was successfully tested on the sugar analogues, resulting in a single main product at a satisfactory 50–56% radiochemical yield. The main products were successfully characterized with NMR, and the method was indicated to be regioselective. It is plausible that the developed method may be improved further in terms of radiochemical yield and molar activity, and that it could prove to be a useful tool for developing novel radioiodinated molecular tracers for plant studies.
  • Hunnakko, Joel (2023)
    An optical data bus is a promising solution for fast data transmission from room temperature to quantum devices at low temperatures, as it would minimize the heat load into the cryogenic system compared to conventional electrical cabling. Previously, a similar measurement setup was used to drive a Josephson junction array (JJA) with an optical pulse pattern generated with a mode-locked laser (MLL). A photodiode (PD) was used to convert the optical signals into photocurrent signals to drive the JJA at low temperature. There was a long and non-ideal transmission line between the PD and the JJA at their operating temperature of 4 K, and experiments and simulations revealed that the non-idealities in this transmission line caused undesired reflections. In this work the PD is integrated on the same chip as the JJA. The on-chip integration provides a shorter transmission line with fewer interfaces, and the compact transmission line promises fewer electrical reflections between the PD and the JJA. A custom-made MLL, operated at a well-defined pulse frequency of 2.3 GHz, emitted the original optical pulses into an optical pulse-pair generator. An optical time delay circuit (OTD) was applied to the pulses generated in the MLL to divide them in time with the desired delays. The generated optical pulse-pair pattern was transmitted from room temperature to the PD at 4 K via a polarization-maintaining optical fiber. The fiber was integrated on top of the PD at 4 K, which was used to drive the JJA sample. The PD was biased with a reverse voltage and the JJA sample with a current. The amplification of the optical twin pulses was varied during the measurements. We measured the DC voltage of the JJA sample and the DC photocurrent of the PD simultaneously, and the measurement was repeated with several manually defined optical time delays between the twin pulses. The work also included optimization of the optical setup, which involved setting the reflective diffraction grating to its optimal position to filter away undesired wavelengths. The optical pulse-pair method used in this work can be applied to investigate the maximum speed of the data signals.