
Browsing by Title


  • Martikainen, Noora (2017)
    In the Neoproterozoic Era (1.0 Ga – 540 Ma) the Earth's climate was shaped by multiple large glaciations and by supercontinent formation and break-up. These climate changes can be seen in the carbon isotope record, where steep negative excursions indicate glaciation. The Neoproterozoic Era is known for the Snowball Earth events, when the Earth was covered by ice even at the equator. At the same time, the Rodinia supercontinent broke up and the continents regrouped, causing the Mozambique Ocean to open and then close during the East African orogeny 650 – 620 Ma ago. The Taita Hills area is located in southern Kenya and lies within the Mozambique Belt. It is divided into the Kurase and Kasigau groups: the Kurase group is considered to consist of metasediments from a continental shelf and the Kasigau group from the continental margin. The Kurase group contains multiple sedimentary carbonate rock layers, which were surveyed by Horkel et al. (1979) and provide the basis for this study. The sedimentary carbonate rock samples were analysed with MP-AES for the elemental concentrations of Ca, Mg, Fe, Sr and Mn, and for the δ13C and δ18O composition of the carbonate. Three of the samples were calcites (Mg/Ca ratio 0.00 – 0.04) and 45 were dolomites (Mg/Ca 0.38 – 0.61). The δ13C values vary from -1.55 to 6.96‰ and the δ18O values lie between -10.2 and -0.66‰. The Mn/Sr ratio indicates that the samples have retained their primary δ13C composition. The δ13C composition differs markedly between the calcite and the dolomite samples, which might indicate that the calcite carbonates have a secondary composition even though their Mn/Sr ratio is low. The positive δ13C values represent interglacial times. Compared with the global δ13C record, the δ13C compositions indicate that the sedimentary carbonate rocks of the Taita Hills region were precipitated before or after the Sturtian Snowball Earth event.
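    As an illustration of the screening rules quoted above, a minimal Python sketch; the sample values are invented, and the Mn/Sr threshold of 10 is a common value from the general chemostratigraphy literature, not one stated in the abstract:

```python
# Hypothetical illustration of the classification and screening described
# in the abstract; the sample values below are made up, not thesis data.

def classify_carbonate(mg_ca: float) -> str:
    """Classify a carbonate sample by its Mg/Ca molar ratio."""
    if mg_ca <= 0.04:          # abstract: calcites at Mg/Ca 0.00-0.04
        return "calcite"
    if 0.38 <= mg_ca <= 0.61:  # abstract: dolomites at Mg/Ca 0.38-0.61
        return "dolomite"
    return "intermediate/unclassified"

def retains_primary_d13c(mn_sr: float, threshold: float = 10.0) -> bool:
    """Low Mn/Sr suggests the primary d13C composition is preserved.
    The threshold of 10 is a widely used literature value, assumed here."""
    return mn_sr < threshold

# Example (fabricated) sample
sample = {"mg_ca": 0.45, "mn_sr": 1.2, "d13c": 4.1}
print(classify_carbonate(sample["mg_ca"]),
      retains_primary_d13c(sample["mn_sr"]))
```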
  • Garmuyev, Pavel (2022)
    RESTful web APIs have gained significant interest over the past decade, especially among large businesses and organizations. An important part of being able to use these public web APIs, however, is knowing how to access, consume, and integrate them into applications. Since developers are the primary audience doing the integration, it is important to support them throughout their API adoption journey. For this, many of today's companies that are heavily invested in web APIs provide an API developer portal as part of their API management program. However, very little accessible and comprehensive information on how to build and structure API developer portals exists yet. This thesis presents an exploratory multi-case study of three publicly available API developer portals of three different commercial businesses. The objective of the case study was to identify the developer-oriented (end-user-oriented) features and capabilities present on the selected developer portals, in order to understand the kinds of information and capabilities API developer portals could provide for developers in general. The exploration was split into three key focus areas, with one research question formulated for each: developer onboarding, web API documentation, and developer support and engagement. The data consisted of field notes describing observations about the portals. These notes were grouped by location and action, and analyzed to identify a key feature or capability as well as any smaller, compounding features and capabilities. The results describe the identified features and capabilities present on the studied API developer portals, and some differences between the portals are noted. The key contribution of this thesis is the results themselves, which can be used as a checklist when building a new API developer portal. The main limitation of the study is that its data collection and analysis processes were subjective and the findings are not properly validated; such improvements remain for future work.
  • Tuppi, Lauri (2017)
    Nowadays even medium-range (~6-day) forecasts are mostly reliable, but occasionally the quality of the forecasts collapses suddenly. During such a collapse, or bust, the actual forecast is worse than a 'forecast' made using climatological mean values. In this study the sudden collapse of predictability is investigated using one example case from April 2011. The OpenIFS NWP model and the ERA-Interim reanalysis were used as the primary tools. Thirteen deterministic forecasts were run with the best available initial conditions, with the focus on the forecast initialized on the worst day; one five-member ensemble forecast, also initialized on the worst day, is investigated as well. The output of OpenIFS was compared to ERA-Interim. Previous studies have shown that the reasons for European forecast busts can be found in North America. Therefore, the aim of this study is to determine whether the incorrect representation of convection over North America led to a forecast bust over Europe. Beyond this main goal, the study discusses how the errors originating from North American convection lead to a forecast bust in Europe six days later, and searches the initial conditions of the ensemble forecast for the cause of the bust. In this case the sudden collapse of predictability in Europe is caused by NWP models predicting a change of weather regime incorrectly. OpenIFS also predicts the formation of a blocking high over Northern Europe although there are no signs of blocking in the reanalysis. In North America, where the source of the error lies, the forecast of the evolution of a cluster of thunderstorms fails, and with it the convective forcing of the large-scale dynamics. The error grows and is transported to Europe by Rossby waves. Although none of the ensemble members forecast the weather in Europe properly, the outcomes deviated enough that a comparison of the initial conditions was meaningful. The most important finding was that a deeper trough over the Rocky Mountains improves the forecast in Europe. This study was able to show evidence that misrepresented convection over North America caused the forecast to fail in Europe, and to clarify how the resulting errors evolved and led to the forecast bust. The error at the beginning of the forecast in North America grows so fast that it is unlikely to be due to model parameterizations; the initial conditions must contain errors. Such failed forecasts are difficult to avoid completely, but the easiest way to reduce them is to improve the quality of the observations in the Rocky Mountains.
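    A minimal sketch of the bust criterion used above, with fabricated arrays standing in for the OpenIFS forecast, the ERA-Interim verification, and the climatology:

```python
# A forecast counts as a bust when it verifies worse than climatology.
# All fields below are synthetic stand-ins, not data from the study.
import numpy as np

def rmse(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.sqrt(np.mean((a - b) ** 2)))

rng = np.random.default_rng(0)
verification = rng.normal(0.0, 1.0, size=(50, 60))   # "ERA-Interim" field
climatology  = np.zeros_like(verification)           # climatological mean
forecast     = verification + rng.normal(0.0, 1.6, size=verification.shape)

# Bust: the day-6 forecast error exceeds the climatological 'forecast' error
is_bust = rmse(forecast, verification) > rmse(climatology, verification)
print("forecast bust:", is_bust)
```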
  • Karikoski, Antti (2019)
    Data compression is one way to gain better performance from a database. Compression is typically achieved with a compression algorithm, an encoding, or both. Effective compression directly lowers the physical storage requirements, translating to reduced storage costs. Additionally, when a data-transfer bottleneck leaves the CPU starved for data, compression can yield improved query performance through increased transfer bandwidth and better CPU utilization. However, obtaining better query performance is not trivial, since many factors affect the viability of compression. Compression has been found especially successful in column-oriented databases, where similar data is stored close together on physical media. This thesis studies the effect of compression on the columnar storage format Apache Parquet through a micro-benchmark based on the TPC-H benchmark. Compression is found to have positive effects on simple queries; however, with complex queries, where data scanning is a relatively small portion of the query, no performance gains were observed. Furthermore, this thesis examines the decoding performance of the encoding layer of a case database, Fastorm. The goal is to determine its efficiency relative to other encodings and whether it could be improved upon. Fastorm's encoding is compared against various encodings of Apache Parquet in a setting where the data comes from a real-world business. Fastorm's encoding is deemed to perform well enough, with strong evidence to consider adding delta encoding to its repertoire of encoding techniques.
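    A hedged sketch of a micro-benchmark in the spirit of the thesis, using the public pyarrow API to write one synthetic table under several Parquet codecs and time full scans; the codecs, table, and sizes are illustrative choices, not the thesis's setup:

```python
# Write the same synthetic table with different Parquet compression codecs,
# then time a full scan of each file (analogous to a simple query).
import os
import time
import numpy as np
import pyarrow as pa
import pyarrow.parquet as pq

n = 1_000_000
table = pa.table({
    "key":   np.arange(n, dtype=np.int64),
    "price": np.random.default_rng(0).normal(100.0, 15.0, n),
})

for codec in ["none", "snappy", "gzip", "zstd"]:
    path = f"/tmp/bench_{codec}.parquet"
    pq.write_table(table, path, compression=codec)
    t0 = time.perf_counter()
    pq.read_table(path)                     # full scan of the file
    elapsed = time.perf_counter() - t0
    print(f"{codec:7s} {os.path.getsize(path)/1e6:7.1f} MB  {elapsed:.3f} s")
```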
  • Valentine, Nicolas (2023)
    A case study of the performance impact on a Node.js component of refactoring it out of a monolith into an independent service. The performance study measured the response time of the blocking part of the JavaScript code in the component; the non-blocking part of the code and the network overhead added by the refactoring were excluded from the review. A literature review did not find any prior research on the performance impact of refactoring a Node.js component from a monolith into microservices. Several studies were found that examined the response time and throughput of REST APIs built with Node.js, with comparisons to other programming languages, and one study related to refactoring an application from a monolith into microservices, but none of the studies found were directly related to the case at hand. The response time of the component improved by 46.5% when it was refactored from the monolith into a microservice. It is possible that as a Node.js monolith application grows, it starts to affect the throughput of the event loop, degrading performance-critical components. For the case component it was beneficial to refactor it into an independent service in order to gain the 92.6 ms improvement in mean response time.
  • Virtanen, Jussi (2022)
    In this thesis we assess the ability of two different models to predict cash flows in private credit investment funds. One model is stochastic and the other deterministic, which makes them quite different. The data obtained for the analysis is divided into three subsamples: mature funds, liquidated funds, and all funds. The data consists of 62 funds in total, of which 36 form the subsample of mature funds and 17 the subsample of liquidated funds. Both models are fitted to all subsamples. The parameters of the models are estimated with different techniques: those of the stochastic model with the conditional least squares method, and those of the Yale model with numerical methods. After the estimation, the parameter values are explained in detail and their effect on the cash flows is investigated; this helps in understanding which properties of the cash flows the models are able to capture. In addition, we assess both models' ability to predict future cash flows. This is done using the coefficient of determination, QQ-plots, and a comparison of predicted and observed cumulative cash flows. With the coefficient of determination we measure how well the models explain the variation between the observed and predicted values; with QQ-plots we examine whether the values produced by the process follow the normal distribution; and with the cumulative cash flows of contributions and distributions we examine whether the models are able to predict the cumulative committed capital and the returns of the fund in the form of distributions. The results show that the stochastic model performs better in its prediction of contributions and distributions, although not for every subsample: the Yale model does better on the cumulative contributions of the mature-fund subsample. Still, the flexibility of the stochastic model makes it more suitable for different types of cash flows and subsamples. It is therefore suggested that the stochastic model be used in the prediction and modelling of private credit funds; it is harder to implement than the Yale model, but it provides more accurate results.
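    A heavily hedged sketch of the deterministic Yale (Takahashi-Alexander) model that the thesis fits numerically; the parameter names, values, and update order below are illustrative assumptions from the general literature, not the thesis's estimates:

```python
# Toy rendition of the Yale model: contributions drawn from the unfunded
# commitment, distributions paid as a bow-shaped fraction of NAV.
def yale_model(commitment=100.0, rc=0.25, growth=0.10,
               bow=2.5, life=12, years=12):
    """Return yearly contributions and distributions.

    rc     -- rate of contribution from the remaining commitment
    growth -- assumed annual growth of net asset value (NAV)
    bow    -- shape parameter of the distribution rate (t/life)**bow
    """
    paid_in, nav = 0.0, 0.0
    contributions, distributions = [], []
    for t in range(1, years + 1):
        c = rc * (commitment - paid_in)      # capital call
        paid_in += c
        rd = min(1.0, (t / life) ** bow)     # rate of distribution
        nav = nav * (1 + growth) + c         # NAV grows, then receives call
        d = rd * nav                         # distribution paid out of NAV
        nav -= d
        contributions.append(c)
        distributions.append(d)
    return contributions, distributions

c, d = yale_model()
print(f"cumulative contributions: {sum(c):.1f}")
print(f"cumulative distributions: {sum(d):.1f}")
```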
  • Wiikinkoski, Oskari (2013)
    When revising its calculator regulations, the Matriculation Examination Board decided to permit, for the first time in the spring 2012 matriculation examination in mathematics, the use of so-called CAS calculators. This reform led to a situation in which some of the problems in the advanced mathematics examination could be solved with the calculator alone. Mathematics teachers have expressed concern about the challenges and threats that calculator use poses to the study and teaching of mathematics in upper secondary school, and to the future of mathematics as a school subject. This thesis examines the effects of technical aids, especially CAS calculators, on the study of mathematics, as well as teachers' views and expectations of how calculators will change upper secondary mathematics teaching and the matriculation examination. A critical discussion has arisen among mathematics teachers about the use of the new aids and especially about the advantage CAS calculators give in the advanced mathematics matriculation examination. This thesis works through problems from the spring 2013 advanced mathematics examination and considers what kind of advantage a candidate using a CAS calculator gains over a fellow student equipped with an ordinary graphing calculator and a table book. The solutions are approached from the perspective of a constructivist conception of learning and of the solution process. After each solution, it is considered whether the candidate's skills are measured as the problem intends if the candidate has the latest technical aids at their disposal. In addition, the thesis presents possibilities for using a CAS calculator as an aid in teaching advanced mathematics and reviews the mathematics software available. As background for the calculator debate, the thesis also reviews the results of a survey on CAS calculators conducted in spring 2012 by the Finnish association of teachers of mathematical subjects (Matemaattisten aineiden opettajien liitto, MAOL). The thesis found that the teaching profession is to some extent divided over the use of technical aids, especially CAS calculators, in support of mathematics teaching. The prevailing opinion among teachers was that at least the examination problems must be reconsidered if CAS calculators are to remain permitted; even a reform of the entire mathematics examination was proposed. This view is supported by the experiences gained in this thesis from solving the advanced mathematics problems: some of the examination problems lost some of their point when a CAS calculator was used to solve them. The thesis also found that it is nevertheless possible to create examination problems that still measure a candidate's mathematical skills reliably.
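    To make the discussed advantage concrete, a hypothetical example of the kind of routine task a CAS answers in one step, with SymPy standing in for a CAS calculator; the task is invented, not taken from the 2013 examination:

```python
# Hypothetical routine task: find the extrema of f(x) = x**3 - 3*x**2 + 1.
# A CAS answers symbolically in one step, which is the kind of shortcut
# the thesis discusses.
import sympy as sp

x = sp.symbols("x")
f = x**3 - 3*x**2 + 1
critical = sp.solve(sp.diff(f, x), x)          # [0, 2]
print([(c, f.subs(x, c)) for c in critical])   # [(0, 1), (2, -3)]
```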
  • Känsäkoski, Silja (2023)
    Lignin is an abundant aromatic polymer found in renewable biomass sources such as trees and grasses. Lignin is largely formed as a side product of the paper and pulping industries, and recent research has been trying to valorize it into value-added products such as fuels and chemicals through catalytic depolymerization. This thesis consists of two parts: a literature review on lignin and its depolymerization methods, and an experimental part where lignin is depolymerized and the products are analyzed. The literature review gives an overview of lignin, its structure, and its different sources. Furthermore, different methods of extracting lignin from the biomass source are reviewed, with the organosolv process highlighted in particular. The different products formed in the depolymerization of lignin are presented along with their applications, and depolymerization methods including pyrolysis, oxidative depolymerization, solvolysis, and reductive depolymerization are reviewed. Finally, different metal catalysts used in reductive lignin depolymerization are presented, with a focus on molybdenum-based ones. In the experimental part, two molybdenum phosphide catalysts are synthesized and characterized. They are used in the depolymerization of fraunhoferk130 and GVL lignin by ethanolysis in a batch or autoclave reactor. The mass balance of the product fractions and the monophenol yields are presented; monophenol yields ranged from 3.5 wt.% to 22.8 wt.%. Additional hydrogen pressure suppresses repolymerization and char formation but has a negative impact on monomer yields, so the true role of hydrogen gas remains unclear. Increasing the reaction temperature led to smaller molar masses but higher char formation. The different catalysts are compared with the help of the monomer yields, mass balances, and molar masses. Overall, the molybdenum-based catalysts showed promise, as monomer yields were in line with those reported in the literature and the catalysts can be synthesized at lower cost than noble metal catalysts.
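    A tiny illustration of the yield and mass-balance bookkeeping reported above; all masses are invented, not measurements from the thesis:

```python
# Invented masses for one hypothetical depolymerization run.
lignin_in_g = 1.00
fractions_g = {"monophenols": 0.12, "oligomers": 0.55,
               "char": 0.18, "gas+losses": 0.15}

monophenol_yield = 100 * fractions_g["monophenols"] / lignin_in_g
mass_balance     = 100 * sum(fractions_g.values()) / lignin_in_g
print(f"monophenol yield: {monophenol_yield:.1f} wt.%")  # 12.0 wt.%
print(f"mass balance:     {mass_balance:.1f} %")         # 100.0 %
```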
  • Koskinen, Outi (2017)
    Lignin (from the Latin lignum, 'wood') is a natural amorphous, aromatic polymer that acts as the essential glue and support giving vascular plants their structural rigidity and colour. It is found mostly between, but also within, the plant cells and in the cell walls. Lignin consists of p-coumaryl (almost exclusively in grasses), coniferyl (common in softwoods) and sinapyl alcohol (common in hardwoods) monomers, which form dimers with different linkage types depending on the types of monomer radicals combined. The result of lignin biosynthesis is a complex aromatic network in which the β-aryl ether (β-O-4) linkage is the most abundant type between monomer units. Within each type there is a lot of variation: lignins differ from species to species, from one tissue to the next in the same plant, and even within different parts of the same cell. The pulping industry separates lignin from biomass, and the lignin waste is combusted on-site for steam generation. Lignin is, however, potentially a renewable source of aromatic platform compounds that are important in other fields of industry; many of these platform chemicals are currently obtained from fossil sources. Hence there is an environmentally motivated need to develop efficient methods to convert lignin into high-value products. The rigid molecular structure of lignin and its abundant hydrogen bonds make it highly recalcitrant towards conventional solvents and mild reaction conditions. In addition, the considerable sulfur content left by the pulping processes acts as a catalyst poison. Thus, processing methods for lignin valorization need to be optimized with proper reaction conditions and effective catalysts while keeping costs as reasonable as possible. This thesis is divided into literature and experimental sections. The literature section discusses the chemical structure and biosynthesis of lignin, the industrial view of lignin, and a short review of recent studies on lignin processing methods, concentrating on hydrogen-dependent methods and on ionic liquids as the hydrogen source. The experimental section concentrates on a novel ionic liquid in studies of the hydrogenation and hydrogenolysis of aileron, a widely used lignin model compound.
  • Nurttila, Sandra (2013)
    Lignocellulosic biomass has received widespread attention as an environmentally benign feedstock for fuels. Biomass consists of cellulose, hemicellulose and lignin and has a rather high oxygen content. Different techniques for the conversion of lignocellulose to liquid fuels have been suggested in the literature. In this thesis the emphasis is on the utilization of biomass-derived platform molecules, which include e.g. ketones, alcohols and carboxylic acids. In the literature section, different deoxygenation and C-C coupling reactions for the conversion of biomass-derived platform molecules to larger hydrocarbons are reviewed. Reaction routes for upgrading the platform molecules 5-hydroxymethylfurfural, 2-furaldehyde, levulinic acid and some monofunctional compounds are presented. These paths comprise mainly dehydration, hydrogenation, aldol condensation and ketonization. Heterogeneous catalysis, particularly with bifunctional supported catalysts, dominates this field. The selectivity that may be achieved with homogeneous catalysts is seen as highly desirable and served as the main incentive for the experimental work, as did the lack of publications in the area of homogeneously catalyzed C-C coupling of biomass-derived compounds. Here, first-row transition metal acetates were utilized as catalysts for the ketonization of biomass-derived levulinic acid and other carboxylic acids, and some experiments were conducted with lignin as well. Reactions were performed under microwave heating or reflux conditions, and the products were analyzed by GC-MS, GC-FID, NMR and FTIR. Combinatorial chemistry made it possible to conduct up to twelve reactions simultaneously; more than 160 reactions were performed in less than two months. The main product in many of the reactions was the aromatic ester phthalic acid mono-2-ethylhexyl ester. Other interesting products included hexadecanoic acid, 2,6-dimethyl-2,5-heptadien-4-one, and diisooctyl and dibutyl phthalate. Despite the small amounts of the products, their presence shows that various compounds may be produced from biomass by tailoring the catalyst and reaction conditions.
  • Forsman, David (2020)
    We develop the theory of categories from the foundations up. The thesis culminates in a theorem asserting that any concrete functor between categories of models of algebraic theories, where the codomain category's alphabet contains no relational information, has a left adjoint. This theorem is based on the General Adjoint Functor Theorem of Peter Freyd. The first chapter is about the set-theoretic foundations of category theory. We present the needed ideas about recursion so that we may define first-order predicate logic, and the chapter ends with an exposition of the connection between Grothendieck universes and inaccessible cardinals. The second chapter begins our discussion of categories and functors between them. We define properties of morphisms, subobjects, quotient objects and Cartesian closed categories, and we discuss embedding and identification morphisms of concrete categories. Much of the third chapter is devoted to showing that the category of small categories is Cartesian closed. This leads us to natural transformations and canonical constructions relating to functors; natural transformations are needed to define equivalences and their generalization, adjoint functors. The fourth chapter enlarges our knowledge of hom-functors and the functors adjacent to them, the representable functors. The study of representable functors yields a profound result, the Yoneda lemma, which implies the full faithfulness of the Yoneda embedding. The fifth chapter concentrates on limit operations in a category, which leads us to completeness: we find out how limit procedures are preserved in constructions and how they behave when functors pass them forward. The last chapter is about adjoint functors. The general and the special adjoint functor theorems, due to Peter Freyd, are proven, and using the General Adjoint Functor Theorem we prove the existence of a left adjoint for all suitable forgetful functors among algebraic categories.
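    For reference, a standard formulation of the theorem the thesis builds on, stated here from the general literature rather than quoted from the thesis:

```latex
\begin{theorem}[General Adjoint Functor Theorem, Freyd]
Let $\mathcal{D}$ be a locally small, complete category and
$G\colon \mathcal{D} \to \mathcal{C}$ a functor. Then $G$ has a left
adjoint if and only if $G$ preserves all small limits and satisfies the
\emph{solution set condition}: for every object $C \in \mathcal{C}$
there exists a set of morphisms $\{\,f_i\colon C \to G(D_i)\,\}_{i \in I}$
such that every morphism $f\colon C \to G(D)$ factors as
$f = G(h) \circ f_i$ for some $i \in I$ and some $h\colon D_i \to D$.
\end{theorem}
```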
  • Rautaoja, Jukka (2020)
    This thesis presents the Cauchy-Euler equation, its solution, and two of its many applications. The Cauchy-Euler equation is a homogeneous linear differential equation with variable coefficients. The first chapter justifies the choice of topic and gives background on linear differential equations and on the history of the Cauchy-Euler equation. The second chapter presents the Cauchy-Euler equation and some of the auxiliary results needed to prove its solution. The third chapter proves the solution of the equation both for the second order and for the n-th order; before each proof, the most important auxiliary results are presented, the Laplace transform being the foremost of them. The second-order solution is proved because it is easier to understand, it is needed in both applications, and it helps in understanding the n-th order solution. The fourth chapter presents two applications of the equation: the solution of Laplace's equation in polar coordinates and the solution of the Black-Scholes equation. Laplace's equation is used in physics to describe time-independent phenomena, for example in electromagnetic potentials, steady-state temperatures, and hydrodynamics; its polar-coordinate form is used when the domain is a disk bounded by a circle. The Black-Scholes equation is used in financial mathematics to describe the change in value of stock options. Both equations are thus widely used and are important applications of the Cauchy-Euler equation. The fifth chapter presents the results of the thesis: the n-th order solution of the Cauchy-Euler equation, the solution of Laplace's equation in polar coordinates, and the solution of the Black-Scholes equation. Both the polar-coordinate Laplace equation and the Black-Scholes equation are solved by separation of variables, which yields two equations, one of which is a second-order Cauchy-Euler equation whose solution was proved earlier.
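    For concreteness, the second-order Cauchy-Euler equation discussed above, in a standard textbook form:

```latex
% Second-order Cauchy-Euler equation, for x > 0:
x^2 y'' + a x y' + b y = 0.
% The trial solution y = x^r yields the indicial equation
r(r-1) + a r + b = 0,
% whose roots r_1, r_2 give the general solution
y(x) =
\begin{cases}
C_1 x^{r_1} + C_2 x^{r_2}, & r_1 \neq r_2 \text{ real},\\
(C_1 + C_2 \ln x)\, x^{r_1}, & r_1 = r_2,\\
x^{\alpha}\bigl(C_1 \cos(\beta \ln x) + C_2 \sin(\beta \ln x)\bigr),
  & r_{1,2} = \alpha \pm i\beta.
\end{cases}
```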
  • Rintala, Jasmiina (2020)
    This thesis deals with the Cauchy distribution and the log-Cauchy distribution derived from it. The Cauchy distribution is continuous and very heavy-tailed, and therefore tolerates outliers in data, which makes it a potential choice for modelling various natural phenomena. The first chapter explains what the standard Cauchy distribution is: which mathematical definitions are needed to derive it and how it is derived. The thesis proves that this distribution has neither an expected value nor a variance. The mode and the median, by contrast, can be computed, and they turn out to coincide for the Cauchy distribution. The log-Cauchy distribution is treated briefly, and its density and distribution functions are derived. After this, various applications of both the Cauchy and the log-Cauchy distribution are surveyed. To give the reader an idea of what the distributions are used for, several studies are reviewed briefly; in a few of them the Cauchy and log-Cauchy distributions are found to fit the modelling tasks well. The final section considers the possibilities of the Cauchy distribution in upper secondary school teaching, based on the newest national upper secondary curriculum (2019). Finally, I present my own proposal for a project work for the advanced mathematics course MAA12 and justify its suitability for this optional course. The project develops students' transversal competences and forms a good basis for cross-curricular teaching.
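    For reference, the standard Cauchy density and the computation behind the non-existent expected value, in the usual textbook form:

```latex
% Density of the standard Cauchy distribution:
f(x) = \frac{1}{\pi\,(1 + x^2)}, \qquad x \in \mathbb{R}.
% The expected value does not exist, since already
\int_{0}^{\infty} \frac{x}{\pi (1 + x^2)}\,dx
  = \lim_{R \to \infty} \frac{\ln(1 + R^2)}{2\pi} = \infty,
% while by symmetry the median and the mode both equal 0.
```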
  • Porna, Ilkka (2022)
    Despite developments in many areas of machine learning in recent decades, a change of data source between the domain in which a model is trained and the domain in which the same model is used for predictions is still a fundamental and common problem. In the area of domain adaptation, these circumstances have been studied by incorporating causal knowledge about the information flow between features into the feature selection for the model. That work has shown promising results in accomplishing so-called invariant causal prediction, meaning prediction performance that is immune to level changes between domains. Within these approaches, recognizing the Markov blanket of the target variable has served as the principal workhorse for finding a good starting point. In this thesis, we investigate closely the property of invariant prediction performance within the Markov blanket of the target variable. Scenarios with latent parents involved in the Markov blanket are also included, to understand how the covariates around a latent parent affect the invariant prediction properties. Before the experiments, we cover the concepts of Markov blankets, structural causal models, causal feature selection, covariate shift, and target shift. We also look into ways of measuring bias between changing domains by introducing transfer bias and incomplete information bias, as these biases play an important role in feature selection, which often involves a trade-off between them. In the experiments, simulated data sets are generated from structural causal models to test scenarios with the changing conditions of interest. In the different scenarios we investigate changes in the features of the Markov blanket between training and prediction domains; some scenarios involve changes in latent covariates as well. As a result, we show that parent features are generally steady predictors enabling invariant prediction. An exception is a changing target, which essentially requires more information about the changes from other, earlier domains to enable invariant prediction. Also, when latent parents are present, it is important to have some real direct causes in the feature set to achieve invariant prediction performance.
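    A minimal simulation sketch of the invariance property described above, using an invented toy structural causal model rather than the thesis's simulation setup: regressing the target on its parent stays stable across domains, while regressing on a child does not:

```python
# Toy SCM: X -> Y -> Z. The domain shift changes the variance of X.
import numpy as np

def simulate(x_scale: float, n: int = 100_000, seed: int = 0):
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, x_scale, n)        # parent; its scale shifts by domain
    y = 2.0 * x + rng.normal(0.0, 1.0, n)  # target: Y := 2X + noise
    z = -y + rng.normal(0.0, 1.0, n)       # child:  Z := -Y + noise
    return x, y, z

def slope(a, b):
    """OLS slope of b regressed on a."""
    return float(np.cov(a, b)[0, 1] / np.var(a))

for x_scale in (1.0, 3.0):
    x, y, z = simulate(x_scale)
    print(f"x_scale={x_scale}: slope(Y~X)={slope(x, y):.2f}  "
          f"slope(Y~Z)={slope(z, y):.2f}")
# slope(Y~X) stays ~2.00 in both domains; slope(Y~Z) drifts with the shift.
```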
  • Lumen, Dave (2016)
    Fluorine-18 is a nuclide that decays by positron emission with a half-life of 110 minutes. Because of its decay mode, F-18 can be used as a tracer in positron emission tomography (PET), where it is the most widely used radionuclide thanks to its conveniently long half-life and its energy (0.64 MeV). New radiopharmaceuticals are being developed for various medical applications, and bio-based nanomaterials have attracted great interest: their large surface-to-volume ratio offers many possibilities, for example in targeted drug delivery. Direct nucleophilic substitution of F-18 is often difficult, and sometimes impossible, in complex, multiply substituted molecules that have not been activated. For example, many biomolecules do not tolerate the high temperatures and solvents used in direct fluorination. In such cases, F-18-labelled prosthetic groups, such as succinimidyl 4-[18F]fluorobenzoate ([18F]SFB), are used to attach F-18 to the biomolecule. The preparation of prosthetic groups often requires multi-step syntheses, which makes their use challenging. [18F]SFB can react with the free amino groups of a peptide chain (lysine and the terminal amino group) under much milder conditions than direct radiofluorination. In this way F-18 can be attached to a biomolecule, labelling it with the radionuclide without damaging its structure. In this thesis, the protein capsid surface of CCMV viruses was labelled with [18F]SFB in different buffer solutions. First, the automated synthesis of [18F]SFB was optimized for the laboratory's equipment so that it could be produced reproducibly with a good radiochemical yield. The labelling of the CCMV viruses was then tested in four different buffer solutions in the pH range 6-8, using phosphate and borate buffers. The [18F]SFB synthesis was successfully optimized and the product was obtained in good yields, with a half-life-corrected yield of 71 ± 3% and a radiochemical purity above 90%. The viruses were labelled, with a labelling efficiency of at best 12.9 ± 14.9%. The labelling could not, however, be carried out reproducibly, and transmission electron microscopy (TEM) images of the labelled viruses showed that the viruses had disintegrated during labelling.
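    The decay-correction arithmetic behind the half-life-corrected yield quoted above, for F-18 with its 110-minute half-life; the example numbers are invented, not the thesis's measurements:

```python
# Correct a measured activity yield back to the start of synthesis by
# dividing out the radioactive decay factor 2**(-t / T_half).
HALF_LIFE_MIN = 110.0  # F-18 half-life

def decay_corrected_yield(measured_fraction: float,
                          elapsed_min: float) -> float:
    """Divide the measured yield by the decay factor accumulated
    over elapsed_min minutes."""
    return measured_fraction / 2 ** (-elapsed_min / HALF_LIFE_MIN)

# e.g. 45% of the starting activity measured 75 min after synthesis start
print(f"{100 * decay_corrected_yield(0.45, 75):.0f}% decay-corrected")  # ~72%
```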
  • Pirttikoski, Anna (2022)
    Ovarian cancer is the most lethal gynecological cancer, and high-grade serous ovarian cancer (HGSOC) is its most common type. HGSOC is often diagnosed at an advanced stage, and most patients relapse after optimal first-line treatment. One reason for the lack of successful treatment in HGSOC is high tumor heterogeneity, including differences across the tumors of distinct patients and even within each tumor. This heterogeneity is the result of genetic and non-genetic factors: phenotypic variability exists even among cancer cells that share the same genetic background. This is because a cell can exist in more than one stable state, in which its genome is in a specific configuration and it expresses certain genes. Diverse cell states and transitions between them initially offer a path for tumor development, and later enable essential tumor behavior such as metastasis and survival under variable environmental pressures, such as those posed by anti-cancer therapies. Generally, phenotypic heterogeneity is inherited from a tumor's cell of origin. This thesis studies cell states in HGSOC cancer cells and in their normal counterparts, fallopian tube epithelial (FTE) cells. The exploration of cell states is based on gene expression data from individual cells, analyzed with state-of-the-art tools and computational methods. Gene modules representing cell states were constructed from genes found in differential gene expression analysis of cancer cells, normal cells, and the tumor microenvironment. Differentially expressed gene (DEG) groups of cancer, normal FTE, and shared epithelial genes were grouped separately into gene modules based on gene-gene associations and community detection. Potential dynamical relationships between cell states were addressed by pseudo-temporal ordering using an RNA velocity modeling approach. With the chosen research strategy, we were able to capture biologically meaningful cell states that are relevant to the development of HGSOC. The cell states found represent processes such as epithelial-mesenchymal transition, inflammation, and stress response, which are known to play a role in cancer development. The transition patterns showed consistent tendencies across the samples, and the trajectories of the normal samples showed more directionality than those of the cancer specimens. The results indicate the existence of shared epithelial states that stay in fixed positions in the developmental trajectory of normal and cancer cells. For example, both epithelial stem cells and stem-like cancer cells seem to utilize oxidative phosphorylation (OXPHOS) for their metabolic needs. On the other hand, more terminal cell states showed higher activity of the tumor necrosis factor alpha and Wnt/beta-catenin pathways, both of which were mutually exclusive with OXPHOS. Overall, this thesis presents a novel approach to studying cell states, whose characterization is essential to understanding tumorigenesis and cancer cell plasticity.
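    A hedged sketch of the kind of pipeline described above, written against the public scanpy/scvelo APIs; the input file name, the cell_type annotation column, and all parameter settings are assumptions, not the thesis's actual workflow:

```python
# Differential expression, gene modules via community detection, and
# RNA-velocity ordering, in outline. Assumes an .h5ad file with
# spliced/unspliced layers and a 'cell_type' annotation column.
import scanpy as sc
import scvelo as scv

adata = scv.read("hgsoc_with_velocity_layers.h5ad")  # hypothetical input

# Differential expression between annotated cell groups
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.tl.rank_genes_groups(adata, groupby="cell_type", method="wilcoxon")

# Gene modules: community detection on a gene-gene neighborhood graph
mod = sc.AnnData(adata.X.T)                # transpose: genes as observations
sc.pp.neighbors(mod, n_neighbors=15)
sc.tl.leiden(mod, resolution=1.0)          # communities = gene modules

# Pseudo-temporal ordering with RNA velocity
scv.pp.moments(adata)
scv.tl.velocity(adata)
scv.tl.velocity_graph(adata)
scv.pl.velocity_embedding_stream(adata, basis="umap")
```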
  • Porttinen, Peter (2020)
    Computing an edit distance between strings is one of the central problems in both string processing and bioinformatics. Optimal solutions to edit distance are quadratic in the lengths of the input strings. The goal of this thesis is to study a new approach to approximating edit distance. We use the chaining algorithm presented by Mäkinen and Sahlin in "Chaining with overlaps revisited" (CPM 2020), implemented verbatim. Building on the chaining algorithm, our focus is on efficiently finding a good set of anchors for it. We present three approaches to computing the anchors as maximal exact matches: the bi-directional Burrows-Wheeler transform, minimizers, and a hybrid implementation of the two. Using the maximal exact matches as anchors, we can efficiently compute an optimal chaining alignment for the strings. The chaining alignment further allows us to determine all the intervals where mismatches occur, by looking at which sequences are not in the chain. Using these smaller intervals lets us approximate the edit distance with a high degree of accuracy and a significant speed improvement. The methods described present a way to approximate edit distance in time complexity bounded by the number of maximal exact matches.
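    A simplified sketch of the anchor-and-chain idea described above: exact k-mer matches stand in for maximal exact matches, and a quadratic longest-chain pass stands in for the Mäkinen-Sahlin chaining algorithm, so this is an illustration rather than the thesis's implementation:

```python
# Approximate edit distance from a chain of non-overlapping anchors:
# matched k-mers cost 0 edits, and each gap between chained anchors
# costs at most max(gap_in_a, gap_in_b) edits.

def anchors(a, b, k=4):
    """All pairs (i, j) with a[i:i+k] == b[j:j+k]."""
    index = {}
    for j in range(len(b) - k + 1):
        index.setdefault(b[j:j+k], []).append(j)
    return sorted({(i, j)
                   for i in range(len(a) - k + 1)
                   for j in index.get(a[i:i+k], [])})

def chain(pairs, k):
    """Longest chain of non-overlapping, co-linear anchors (O(n^2) DP)."""
    if not pairs:
        return []
    best = [1] * len(pairs)
    prev = [-1] * len(pairs)
    for t, (i, j) in enumerate(pairs):
        for s in range(t):
            if (pairs[s][0] + k <= i and pairs[s][1] + k <= j
                    and best[s] + 1 > best[t]):
                best[t], prev[t] = best[s] + 1, s
    t = max(range(len(pairs)), key=best.__getitem__)
    out = []
    while t != -1:
        out.append(pairs[t])
        t = prev[t]
    return out[::-1]

def approx_edit_distance(a, b, k=4):
    """Sum the gap costs between consecutive chained anchors."""
    dist, pa, pb = 0, 0, 0
    for i, j in chain(anchors(a, b, k), k):
        dist += max(i - pa, j - pb)
        pa, pb = i + k, j + k
    return dist + max(len(a) - pa, len(b) - pb)

print(approx_edit_distance("tervetuloa", "tervetulua"))  # 2 (true distance 1)
```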
  • Haavisto, Noora (2019)
    Cities are facing pressure to overcome critical challenges that force us to rethink our unsustainable mobility patterns, and the transportation sector is therefore going through major changes. Mobility as a Service (MaaS), a concept that originates from Finland, is one of the innovations trying to change how we travel. MaaS brings all transport providers and modes onto one platform; a distinctive feature is the possibility to buy tickets for the entire journey, removing the need to go through multiple websites and ticket schemes. However, MaaS is still an emerging concept and therefore lacks an official definition. Finland has been at the forefront of this transportation reform with new legislation that supports the creation of MaaS. The public sector has traditionally had a central role in the provision of transport services where regulation and subsidies are needed, but the new legislation strongly advocates market-based services, and the public sector thus needs to reconsider its position. It is therefore important to understand how the Finnish public sector, and the parties actually executing the law, see MaaS, its impacts, and their own role in it. The thesis is qualitative in nature: 20 public sector representatives were interviewed from 17 different organizations, consisting of governmental organizations, interest groups, regional organizations, and cities of varying size. The interview analysis was guided by the concept of emerging technology, which is characterized as technology that can change multiple sectors at the same time but has not yet demonstrated its value. The results showed wide variation in how public sector representatives define MaaS. The respondents also felt that MaaS faces many challenges, such as a working business model, lack of services, technical challenges, and the extent of demand, among others. On the positive side, MaaS could make transport more efficient and provide savings for the public sector. From the user's point of view it was clear that MaaS needs to be effortless in order to compete with private cars. Overall the respondents saw more opportunities in MaaS than possible negative effects, but the lack of a widespread MaaS scheme makes any effects hard to evaluate; MaaS also raised suspicions among some respondents. As for the legislation, it did not gather any positive feedback outside of government officials, and especially the openness of the drafting process received criticism. The results also showed contradicting views on roles among the different groups of representatives. In conclusion, attention should be paid to how future policies are formed, as the exclusion experienced in drafting the legislation may have hindered cooperation and created suspicion towards the whole concept. Additionally, there are clearly insecurities inside the public sector caused by the uncertainties related to MaaS; implementation has been slow, since the public sector feels the government has told it to do something it does not have the ability to do. Nevertheless, the public sector generally still welcomes MaaS. Cities in particular hoped that MaaS would enable them to cut their services in low-density areas. There is, however, still no will to support MaaS financially; taking the risks is seen as a job for the private sector.
  • Colliander, Camilla (2022)
    Software development speed has significantly increased in recent years with methodologies like Agile and DevOps, which use automation, among other techniques, to enable continuous delivery of new features and software updates to the market. This increased speed has given rise to concerns over guaranteeing security at such a pace. To improve security in today's fast-paced software development, DevSecOps was created as an extension of DevOps. This thesis focuses on the experiences and challenges of organizations and teams striving to implement DevSecOps. We first view our concepts through the existing literature, and then conduct an online survey of 37 professionals from both security and development backgrounds. The results present the participants' overall sentiments towards DevSecOps and the challenges they struggle with. We also investigate what kinds of solutions have been tried to mitigate these issues and whether these solutions have indeed worked.
  • Lämsä, Suvi (2021)
    Urban environments are constantly changing and expanding: they grow, evolve, and adapt to the needs of society and residents. Environmental changes also have an impact on urban greenery, such as trees, because the growing building stock and expanding cityscape encroach on these green spaces. The significance of green spaces is nevertheless understood, as they have a positive impact on residents' well-being and health; urban trees, for example, are known to improve air quality and to provide mentally relaxing environments. As this importance is emphasized, changes in these areas must be monitored, which increases the importance of change detection studies. Change detection is the comparison of two or more datasets from the same area but from different times. Changes have principally been detected with various remote sensing methods, such as aerial and satellite images, but as airborne laser scanning technology and multi-temporal laser scanning datasets have become more common, the use of laser scanning data has also increased. The advantage of laser scanning is above all its ability to produce three-dimensional information about an area, so vertical properties can be studied as well: the method can detect changes in urban tree cover as well as in tree height. The aim of this study was to investigate, from multi-temporal laser scanning data, how tree cover and especially canopy height changed in the Kuninkaantammi area of Helsinki during 2008–2015, 2015–2017, 2017–2020, and 2008–2020. One of the starting points of this study was to find out how well airborne laser scanning datasets acquired with different sensors and survey parameters are suited to change detection, what kinds of problems the differences between datasets raise, and how those problems can be reduced. The study used laser scanning data from the National Land Survey of Finland and from the city of Helsinki for four different years. A canopy height model was produced from each dataset, and changes were calculated as the difference between canopy height models. The results show that multi-temporal laser scanning data require a lot of manual processing to make the datasets comparable. The greatest problems were differences in point density and in the classification of the data. The sparse data from the National Land Survey of Finland limited how changes could be studied, so changes were detected only at a general level. In addition, each dataset was classified differently, which affected the usability of the classes in the datasets. The problems encountered were reduced by manual work such as digitizing, or by masking non-vegetation objects. The results showed that the change in the Kuninkaantammi area was relatively large during the study period: 12.1% of the tree cover was lost between 2008 and 2015, 9.9% between 2015 and 2017, and 13.2% between 2017 and 2020. An increase in canopy height was also detected: between 2008 and 2015, 44.2% of the area had a greater than 2 m increase in canopy height, and the corresponding increase occurred in 11.1% and 3.5% of the area in 2015–2017 and 2017–2020, respectively. Although the changes were observed at a general level, it can be concluded that the datasets used can provide valuable information about the changes that have taken place in urban greenery in the area.
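    A minimal sketch of the differencing step described above: a canopy height model (CHM) is the surface model minus the terrain model, and change is the difference of two CHMs; the arrays below are synthetic stand-ins for the rasterized point clouds used in the thesis:

```python
# CHM = DSM - DTM per raster cell; change = CHM(t2) - CHM(t1).
import numpy as np

rng = np.random.default_rng(1)
dtm = rng.uniform(20, 25, (100, 100))           # ground elevation (m)
dsm_2008 = dtm + rng.uniform(0, 15, dtm.shape)  # top-of-canopy surface (m)
dsm_2015 = dtm + rng.uniform(0, 15, dtm.shape)

chm_2008 = dsm_2008 - dtm                       # canopy height per cell
chm_2015 = dsm_2015 - dtm
change = chm_2015 - chm_2008

growth_share = np.mean(change > 2.0)            # cells with >2 m height gain
loss_share   = np.mean((chm_2008 > 2.0) & (chm_2015 <= 2.0))
print(f"share with >2 m growth: {100 * growth_share:.1f}%")
print(f"share of canopy lost:   {100 * loss_share:.1f}%")
```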