
Browsing by Title


  • Silvennoinen, Meeri (2022)
    Malaria is a major cause of human mortality, morbidity, and economic loss. P. falciparum is one of six Plasmodium species that cause malaria and is widespread in sub-Saharan Africa. Many of the currently used antimalarial drugs have become less effective, have adverse effects, and are highly expensive, so new ones are needed. mPPases are integral membrane pyrophosphatases that are found in the vacuolar membranes of protozoa but not in humans. These enzymes pump sodium ions and/or protons across the membrane and are crucial for parasite survival and proliferation, which makes them promising targets for new drug development. In this study we aimed to identify and characterize transient pockets in mPPases that could offer suitable ligand binding sites. P. falciparum was chosen because of its therapeutic interest, and T. maritima and V. radiata were chosen because they are test systems in compound discovery. The research was performed using molecular modelling techniques, mainly homology modelling, molecular dynamics, and docking. mPPases from the three species were used to build five different systems: P. falciparum (apo closed conformation), T. maritima (apo open, open with ligand, and apo closed) and V. radiata (open with ligand). No 3D structure of P. falciparum mPPase is available, so a homology model was built using the closest available structure, V. radiata mPPase, as a template. Molecular dynamics simulations of 100 ns were conducted for the five systems: monomeric mPPase for P. falciparum and dimeric mPPases for the others. For each of the five trajectories, the two most mutually dissimilar representative 3D structures were selected for further analysis using clustering. These structures were first analyzed to identify possible binding pockets using two independent methods, SiteMap and blind docking (where no pre-determined cavity is set for docking). A second set of experiments using different scores (druggability, enclosure, exposure, …) and targeted docking was then run to characterize all the located pockets. As a result, only half of the catalytic pockets were identified. No transient pockets were identified in P. falciparum mPPase, and all of those found were located within the membrane. Docking was performed using compounds that have shown inhibitory activity in previous studies, but it did not give good results in the tested structures. In the end, none of the transient pockets proved interesting for further study.
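    As an illustration of the clustering step, here is a minimal numpy sketch of selecting the two mutually most dissimilar frames of a trajectory by pairwise RMSD. All names and array shapes are hypothetical, and the frames are assumed to be already superposed; this is not the thesis's actual pipeline.
    ```python
    import numpy as np

    def rmsd(a, b):
        # Root-mean-square deviation between two (n_atoms, 3) coordinate
        # arrays, assuming the frames are already superposed.
        return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

    def most_dissimilar_pair(frames):
        # Return the indices of the two frames with the largest mutual RMSD.
        best, pair = -1.0, (0, 0)
        for i in range(len(frames)):
            for j in range(i + 1, len(frames)):
                d = rmsd(frames[i], frames[j])
                if d > best:
                    best, pair = d, (i, j)
        return pair

    # Example: 50 random 100-atom "frames" standing in for MD snapshots.
    frames = [np.random.rand(100, 3) for _ in range(50)]
    print(most_dissimilar_pair(frames))
    ```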
  • Koskinen, Anssi (2020)
    The applied mathematical field of inverse problems studies how to recover an unknown function from a set of possibly incomplete and noisy observations. One example of a real-life inverse problem is image destriping, the process of removing stripes from images. Stripe noise is a very common phenomenon in various fields, such as satellite remote sensing and dental x-ray imaging. In this thesis we study methods to remove stripe noise from dental x-ray images. The stripes in the images are a consequence of the geometry of our measurement and the sensor. In x-ray imaging, x-rays are sent at a certain intensity through the measured object, and the remaining intensity is then measured using an x-ray detector. The detectors used in this thesis convert the remaining x-rays directly into electrical signals, which are then measured and finally processed into an image. We observe that the measured values follow an exponential model and use this knowledge to recast destriping as a nonlinear fitting problem. We study two linearization methods and three iterative methods, and examine the performance of the correction algorithms with both simulated and real stripe images. The results of the experiments show that although some of the fitting methods give better results in the least squares sense, the exponential prior leaves some visible line artefacts. This suggests that the methods can be further improved by applying a suitable regularization method. We believe that this study is a good baseline for a better correction method.
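    The recast of destriping as a nonlinear fitting problem can be sketched with a small example: fitting an exponential response model to one detector column by nonlinear least squares. The parametrization and all names here are assumptions for illustration, not the model used in the thesis.
    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def response(x, a, b, c):
        # Hypothetical exponential model for the measured detector values;
        # the exact parametrization in the thesis is not given here.
        return a * np.exp(-b * x) + c

    # Simulated intensities along one detector column.
    x = np.linspace(0.0, 1.0, 200)
    y = response(x, 2.0, 3.0, 0.1) + np.random.normal(0.0, 0.02, x.size)

    # Nonlinear least-squares fit of the exponential model (cf. the
    # iterative fitting methods studied in the thesis).
    params, cov = curve_fit(response, x, y, p0=(1.0, 1.0, 0.0))
    print(params)
    ```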
  • Merikoski, Jori (2016)
    We study growth estimates for the Riemann zeta function on the critical strip and their implications for the distribution of prime numbers. In particular, we use the growth estimates to prove the Hoheisel-Ingham Theorem, which gives an upper bound for the difference between consecutive prime numbers. We also investigate the distribution of prime pairs, in connection with which we offer original ideas. The Riemann zeta function is defined as ζ(s) := \sum_{n=1}^{∞} n^{-s} in the half-plane Re s > 1. We extend it to a meromorphic function on the whole plane with a simple pole at s = 1, and show that it satisfies the functional equation. We discuss two methods, van der Corput's and Vinogradov's, for bounding the growth of the zeta function on the critical strip 0 ≤ Re s ≤ 1. Both of these are based on the observation that ζ(s) is well approximated on the critical strip by the finite exponential sum \sum_{n=1}^{T} n^{-s} = \sum_{n=1}^{T} \exp\{-s \log n\}. Van der Corput's method uses the Poisson summation formula to transform this sum into a sum of integrals, which can be easily estimated. This yields the estimate ζ(1/2 + it) = \mathcal{O}(t^{\frac{1}{6}} \log t) as t → ∞. Vinogradov's method transforms the problem of estimating an exponential sum into a combinatorial problem; it is needed to give a strong bound for the growth of the zeta function near the vertical line Re s = 1. We use complex analysis to prove the Hoheisel-Ingham Theorem, which states that if ζ(1/2 + it) = \mathcal{O}(t^{c}) for some constant c > 0, then for any θ > \frac{1+4c}{2+4c} and for any function x^{θ} << h(x) << x, we have ψ(x+h) - ψ(x) ∼ h as x → ∞. The proof relies heavily on the growth estimate obtained by Vinogradov's method. Here ψ(x) := \sum_{n ≤ x} Λ(n) = \sum_{p^k ≤ x} \log p is the summatory function of the von Mangoldt function. From this we obtain, using van der Corput's estimate, that the difference between consecutive primes satisfies p_{n+1} - p_{n} < p_{n}^{\frac{5}{8} + \epsilon} for all large enough n and for any \epsilon > 0. Finally, we study prime pairs and the Hardy-Littlewood Conjecture on their distribution. More precisely, let π_{2k}(x) stand for the number of primes p ≤ x such that p + 2k is also prime. The following ideas are all original contributions of this thesis: We show that the average of π_{2k}(x) over 2k ≤ x^{θ} is exactly what is predicted by the Hardy-Littlewood Conjecture, where we can choose θ > \frac{1+4c}{2+4c} as above. We also give a lower bound for the average of π_{2k}(x) over the much smaller intervals 2k ≤ E \log x, and give interpretations of our results using the concept of equidistribution. In addition, we study prime pairs using the discrete Fourier transform: we express the function π_{2k}(n) as an exponential sum and extract from this sum the term predicted by the Hardy-Littlewood Conjecture. This can be interpreted as a discrete analog of the method of major and minor arcs, which is often used to tackle problems of additive number theory.
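    The prime-pair counting function π_{2k}(x) is easy to evaluate for small x, which is useful for checking the Hardy-Littlewood prediction numerically. A brief sieve-based sketch (illustrative only, not from the thesis):
    ```python
    def primes_up_to(n):
        # Sieve of Eratosthenes; sieve[i] == 1 iff i is prime.
        sieve = bytearray([1]) * (n + 1)
        sieve[0:2] = b"\x00\x00"
        for p in range(2, int(n ** 0.5) + 1):
            if sieve[p]:
                sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
        return sieve

    def pi_2k(x, k):
        # pi_{2k}(x): number of primes p <= x with p + 2k also prime.
        sieve = primes_up_to(x + 2 * k)
        return sum(1 for p in range(2, x + 1) if sieve[p] and sieve[p + 2 * k])

    print(pi_2k(10**5, 1))  # twin-prime count up to 10^5
    ```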
  • Kaipio, Mikko Ari Ilmari (2014)
    This master's thesis consists of two parts related to atomic layer deposition (ALD) processes: a literature survey of so-called ex situ in vacuo analysis methods used in investigations of ALD chemistry, and a summary of the work performed by the author using in situ methods. The first part of the thesis is divided into four sections. In the first two sections ALD is introduced as a thin film deposition method, and in situ and ex situ in vacuo publications related to ALD are summarized. The third section is a general overview of ex situ in vacuo analysis methods, and the final section a literature review covering publications where ex situ in vacuo techniques have been employed in studying ALD processes, with a strong emphasis on analysis methods based on the use of x-rays. The second part of the thesis consists of in situ quartz crystal microbalance and quadrupole mass spectrometry studies of the V(NEtMe)4/D2O, V(NEtMe)4/O3, Mg(thd)2/TiF4 and Cu2(CH3COO)4/D2O ALD processes. A brief overview of the experimental apparatus and related theory is given, followed by a presentation and discussion of the results.
  • Rissanen, Olli (2014)
    Delivering more value to the customer is the goal of every software company. In modern software business, delivering value in real-time requires a company to utilize real-time deployment of software, data-driven decisions and empirical evaluation of new products and features. These practices shorten the feedback loop and allow for faster reaction times, ensuring the development is focused on features providing real value. This thesis investigates the practices known as continuous delivery and continuous experimentation as means of providing value for customers in real-time. Continuous delivery is a development practice where software functionality is deployed continuously to the customer environment; the process includes automated builds, automated testing and automated deployment. Continuous experimentation is a development practice where the entire R&D process is guided by conducting experiments and collecting feedback. As a part of this thesis, a case study is conducted in a medium-sized software company. The research objective is to analyze the challenges, benefits and organizational aspects of continuous delivery and continuous experimentation in the B2B domain. The data is collected from interviews with members of two teams developing two different software products. The results suggest that technical challenges are only one part of the challenges a company encounters in this transition. For continuous delivery, the company must also address challenges related to customers and procedures. The core challenges are caused by having multiple customers with diverse environments and unique properties, whose business depends on the software product. Some customers also require manual acceptance testing, which slows down production deployments. For continuous experimentation, the company also has to address challenges related to customers and organizational culture. An experiment which reveals value for a single customer might not reveal as much value for other customers, due to the unique properties of each customer's business. Additionally, the speed at which experiments can be conducted is tied to the speed at which production deployments can be made. The benefits found from these practices support the case company in solving many of its business problems. The company can expose software functionality to customers from an earlier stage, and guide product development by utilizing feedback and data instead of opinions.
  • Koutsompinas, Ioannis Jr (2021)
    In this thesis we study extension results related to compact bilinear operators in the setting of interpolation theory, and more specifically the complex interpolation method as introduced by Calderón. We say that: 1. the bilinear operator T is compact if it maps bounded sets to sets of compact closure; 2. \bar{A} = (A_0, A_1) is a Banach couple if A_0, A_1 are Banach spaces that are continuously embedded in the same Hausdorff topological vector space. Moreover, if (Ω, \mathcal{A}, μ) is a σ-finite measure space, we say that: 3. E is a Banach function space if E is a Banach space of scalar-valued functions defined on Ω that are finite μ-a.e. and such that the norm of E is related to the measure μ in an appropriate way; 4. the Banach function space E has absolutely continuous norm if for any function f ∈ E and for any sequence (Γ_n)_{n=1}^{∞} ⊂ \mathcal{A} satisfying χ_{Γ_n} → 0 μ-a.e. we have that ∥f · χ_{Γ_n}∥_E → 0. Assume that \bar{A} and \bar{B} are Banach couples, \bar{E} is a couple of Banach function spaces on Ω, θ ∈ (0, 1) and E_0 has absolutely continuous norm. If the bilinear operator T : (A_0 ∩ A_1) × (B_0 ∩ B_1) → E_0 ∩ E_1 satisfies a certain boundedness assumption and T : \tilde{A}_0 × \tilde{B}_0 → E_0 compactly, we show that T may be uniquely extended to a compact bilinear operator T : [A_0, A_1]_θ × [B_0, B_1]_θ → [E_0, E_1]_θ, where \tilde{A}_j denotes the closure of A_0 ∩ A_1 in A_j and [A_0, A_1]_θ denotes the complex interpolation space generated by \bar{A}. The proof of this result comes after we study the case where the couple of Banach function spaces is replaced by a single Banach space.
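    For readability, the main extension theorem stated in the abstract can be written in display form (a restatement of the above, with the boundedness hypothesis left abstract as in the abstract itself):
    ```latex
    % \tilde{A}_j is the closure of A_0 \cap A_1 in A_j and
    % [\,\cdot\,,\,\cdot\,]_\theta is Calderón's complex interpolation functor.
    \[
    \left.
    \begin{aligned}
    &T : (A_0 \cap A_1) \times (B_0 \cap B_1) \to E_0 \cap E_1
        \quad\text{bounded (in the assumed sense)}\\
    &T : \tilde{A}_0 \times \tilde{B}_0 \to E_0
        \quad\text{compact}
    \end{aligned}
    \right\}
    \;\Longrightarrow\;
    T : [A_0, A_1]_\theta \times [B_0, B_1]_\theta \to [E_0, E_1]_\theta
    \quad\text{compact.}
    \]
    ```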
  • Vazquez Muiños, Henrique (2016)
    In this thesis we consider an extension of the Standard Model (SM) with an SU(2) symmetric dark sector, and study its viability as a dark matter (DM) model. In the dark sector, a hidden Higgs mechanism generates three massive gauge bosons, which are the DM candidates of the model. We allow a small coupling between the SM Higgs and the scalar of the dark sector, such that there is scalar mixing. We study the new interactions in the model and analyse the consequences of the scalar mixing: new possible decays of the Higgs into DM, Higgs decay rates and production cross sections different from the SM predictions, and possible interactions between DM and normal matter. We study the evolution of the DM abundance from the early universe to the present and compare the relic densities that the model yields with the experimental value measured by the Planck satellite. We compute the decay rates of the Higgs in the model and test whether they are consistent with the experimental data from ATLAS, CMS and the Tevatron. We calculate the cross section for the interaction between DM and normal matter and compare it with the data from the latest direct detection experiments, LUX and XENON100. We discuss the impact of the experimental constraints on the parameter space of the model, and find the regions that give the best fit to the experimental data. In this work we show that the agreement with the experiments is best when both the DM candidates and the dark scalar are heavier than the Higgs boson.
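    The relic-abundance computation mentioned here conventionally starts from the Boltzmann equation for the DM number density; the abstract does not write it out, so the standard textbook form is sketched below for context.
    ```latex
    % Standard freeze-out Boltzmann equation (assumed textbook form):
    % n is the DM number density, H the Hubble rate, and
    % <sigma v> the thermally averaged annihilation cross section.
    \[
    \frac{\mathrm{d}n}{\mathrm{d}t} + 3Hn
      = -\langle \sigma v \rangle \left( n^{2} - n_{\mathrm{eq}}^{2} \right)
    \]
    ```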
  • Laurila, Terhi (2016)
    An intense storm named Mauri swept over Lapland, Finland on 22 September 1982, causing 3 million m³ of forest damage and two fatalities. It has been suggested that Mauri originated from category 4 hurricane Debby, but the linkage between Debby and Mauri and their connection to climatic conditions have not been investigated before. In this thesis, a climatic overview of September 1982 in comparison to the Septembers of 1981-2010 is provided. The calculations are based on ERA-Interim reanalysis data produced by the European Centre for Medium-Range Weather Forecasts. The track of the storm is determined from ERA-Interim data from the time Debby occurred until Mauri crossed Finland. For comparison, the evolution of Debby is also presented using the storm track data of the National Oceanic and Atmospheric Administration. The extratropical transition (ET) and phase diagram of Debby and the synoptic evolution of Mauri are examined. ET is defined to start when the cyclone loses the symmetric hurricane eye feature and forms asymmetric fronts, and ET is completed when the warm core of the storm turns cold. A comparison between Mauri and two other intense storms that have affected Europe is briefly presented. It was discovered that Debby completed ET before rapidly crossing the North Atlantic. However, near the UK ex-Debby started to lose the cold core and asymmetric structure typical of an extratropical cyclone. Ex-Debby turned warm-cored again while crossing Sweden, and at the same time it deepened rapidly, by up to 27 hPa in 24 hours, defining the storm as a meteorological bomb. Ex-Debby developed a frontal structure along a pre-existing cold front before hitting Lapland. It merged with a pre-existing low pressure centre from the Norwegian Sea and proceeded right ahead of an upper trough, a region favourable for cyclogenesis. These made the storm, now named Mauri, more intense as it crossed Lapland, and led to 30 m/s winds according to the Finnish Meteorological Institute. Meanwhile, an occluded bent-back front approached Mauri, wrapped around the storm trapping the warmer air inside it, and formed a warm seclusion. Due to this, Mauri regained a symmetric structure before reaching the Barents Sea. Examining the climatic aspect, positive surface pressure and temperature anomalies over central Europe caused the jet stream to shift northward. Positive NAO and AO phases also shifted the storm tracks in general to higher latitudes. Hence, climatic conditions favoured a more northerly storm track. The results of this thesis suggest that Mauri was the remnant of hurricane Debby. It was shown that ERA-Interim was successful in locating the evolution of the cyclone and analysing its structure, whereas it underestimated the surface pressure and wind speed values. Future work is still needed, for instance comparing these results to different reanalyses and compiling a statistical examination of hurricane-originated storms in Europe, in order to adapt these methods and climatic indicators to future cases and storm predictions.
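    The "meteorological bomb" label can be checked against the standard Sanders-Gyakum (1980) criterion, which the abstract does not spell out: a central-pressure fall of at least 24 hPa in 24 hours, scaled to latitude. At a representative Lapland latitude the quoted 27 hPa deepening clears the threshold.
    ```latex
    % Sanders-Gyakum criterion for explosive cyclogenesis at latitude phi:
    \[
    \Delta p_{24\,\mathrm{h}} \;\ge\; 24\,\mathrm{hPa}
      \cdot \frac{\sin \varphi}{\sin 60^{\circ}},
    \qquad\text{e.g. at } \varphi = 65^{\circ}\mathrm{N}:\quad
    24 \cdot \frac{0.906}{0.866} \approx 25.1\,\mathrm{hPa} \;<\; 27\,\mathrm{hPa}.
    \]
    ```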
  • Ryyppö, Timo (2012)
    The Eyjafjallajökull volcano in Iceland erupted on 14 April 2010. The strength of the eruption and the resulting ash cloud halted air traffic in almost all of Europe, Finland included. This master's thesis explains why a volcano erupted in Iceland in particular, and what effects the eruption had. The thesis presents various methods for observing and modelling the transport of the ash. By interpreting and combining the results of different models and methods, it seeks to answer the question: was there ash over Finland or not? The thesis focuses on remote sensing measurements both from satellites and from the ground. The satellite data come from the satellites received at the Finnish Meteorological Institute's (FMI) research centre in Lapland. The satellite instruments used are MODIS (Moderate Resolution Imaging Spectroradiometer) and OMI (Ozone Monitoring Instrument). The ground-based remote sensing observations were made with a Brewer spectrophotometer and a PFR sun photometer (Precision Filter Radiometer). PFR measurements are available from both Sodankylä and Jokioinen, and Brewer measurements from Sodankylä. In addition to the remote sensing data, the thesis uses two numerical models: FMI's SILAM (System for Integrated modeLling of Atmospheric coMposition) and the UK Met Office's NAME (Numerical Atmospheric-dispersion Modelling Environment). FMI's sulphur dioxide measurement network is also used, both for comparison with the satellite data and for trajectory analysis. Different approaches were used to determine the presence and amount of ash. The most important conclusion of the thesis is that no single measurement or model can establish with certainty whether there was volcanic ash over Finland in spring 2010. In observing and monitoring volcanic eruptions it is therefore important to combine the results of different measurements and models and to examine them as a larger whole.
  • Zhu, Lin (2016)
    Hydrogels are promising biomaterials for tissue engineering. Chemically, their capacity for hydrogen bonding with water makes them hydrophilic compounds. Hydrogels contain 95-99% water as the swelling agent and share characteristics with the extracellular matrix; they are therefore suitable for cell growth and for cell culture. Hydrogels can mimic cell microenvironments and promote cell differentiation through interactions with cells, and cells within them can exchange oxygen and nutrients and remove metabolic waste. Hydrogels can be categorized by source into natural, synthetic and hybrid hydrogels. Agarose, collagen and calcium alginate are among the most popular natural hydrogels, while poly(ethylene glycol) and its derivative poly(ethylene glycol) diacrylate (PEGDA) are indispensable synthetic hydrogels. In this thesis, hydrogels are studied for their chemical structure, physical and mechanical properties, and gel formation. Typical hydrogels, i.e. agarose, PEGDA, collagen and calcium alginate, are reviewed for their methods of formation, mechanical properties and applications. Since a hydrogel is a solid containing a large amount of water, it is viscoelastic, and the rheological testing of viscoelastic materials is described. Micropatterning methods for hydrogels are investigated in a variety of approaches, and how the patterned surfaces affect cell behaviour is discussed in our literature review. In the experimental part, agarose and PEGDA hydrogels were successfully fabricated, and their micropatterned forms show promising properties. In addition, water diffusion in the hydrogels, the influence of temperature on hydrogel structure, and the durability of the structures in storage are investigated. Hydrogel viscoelasticity is measured with a rheometer. Hydrogels are also tested in chips and cell wells for future cell growth studies. Finally, this research has successfully fabricated 3D micropatterned hydrogels for cell culture.
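    The rheometry described here rests on the standard small-amplitude oscillatory shear formalism, summarized below for context (textbook convention, not a result of the thesis): the complex shear modulus splits into an elastic storage part and a viscous loss part.
    ```latex
    % Complex shear modulus and loss tangent for a viscoelastic material:
    \[
    G^{*}(\omega) = G'(\omega) + i\,G''(\omega),
    \qquad
    \tan \delta = \frac{G''(\omega)}{G'(\omega)}
    \]
    ```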
  • Uusikorpi, Juuso (2020)
    The geochemical regolith data gathered from Dzhumba, a gold prospect in eastern Kazakhstan, were analyzed using factor analysis and then integrated into ArcGIS as spatial data. The principal axis factoring method was used for factor extraction, combined with varimax orthogonal rotation and Kaiser normalization. Five clear factors were extracted from the data set of 47 elements in 3942 regolith samples. Kriging interpolation was used to generate spatial data surfaces from the factor scores. The generated factors are composed of the geochemical associations in the raw data and represent the underlying geological processes and formations of the area. The fourth factor represents gold mineralization, with As, Sb, Au, Zr, Sc, Mn, Mo, Cu, K and Ni being the elements positively loaded onto it. Therefore, single element maps of these elements have been produced alongside the factor maps in order to examine factor 4 more closely. Maps of structural geology and alteration in the Dzhumba project area have also been produced to give a better understanding of the factor maps. The data suggest that the deposit type is an orogenic gold deposit. The other factors gave interesting results as well, providing information about the different geological units of the area. Factor 1 represents granitic rocks by their feldspar and trace element content, factor 2 represents black shales with possible mafic rock constituents, factor 3 represents a sulfide-rich mafic mineral group or graphitic rocks that are most likely black shales, and factor 5 possibly represents calcite alteration. Factor 4 is the main interest of this study. The most intense loadings for factor 4 are in Brigadnoe, Svistun and Dzhumba, with a small peak in Belyi. The single element map for gold mostly corresponds to factor 4 for Svistun and Dzhumba, while Brigadnoe is represented by a small peak. However, gold has a major presence in Fedor-Ivanovskoe, which is absent from factor 4. Further exploration in Fedor-Ivanovskoe could be performed to clarify whether this is due to an unrelated gold-only deposit or some other event. Possible future exploration in the area could benefit from the factor 4 results, using As and Sb, or a combination of As, Sb, Zr, Sc, Mn, Mo, Cu, K and Ni, as pathfinders for possible gold occurrences.
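    The extraction setup described above (principal axis factoring, varimax rotation, five factors) can be sketched with the factor_analyzer Python package. The file name and column layout are hypothetical, and whether the package's normalization settings match the thesis exactly is an assumption.
    ```python
    import pandas as pd
    from factor_analyzer import FactorAnalyzer

    # Assumed layout: one row per regolith sample (3942 in the thesis),
    # one column per element (47 in the thesis), e.g. "Au", "As", "Sb", ...
    df = pd.read_csv("dzhumba_regolith.csv")  # hypothetical file name

    # Principal axis factoring with varimax rotation and five factors,
    # mirroring the setup described in the abstract.
    fa = FactorAnalyzer(n_factors=5, method="principal", rotation="varimax")
    fa.fit(df)

    loadings = pd.DataFrame(fa.loadings_, index=df.columns)
    print(loadings.sort_values(3, ascending=False).head(10))  # factor 4
    scores = fa.transform(df)  # per-sample factor scores, ready for kriging in GIS
    ```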
  • Tilander, Vivianna (2023)
    Context: An abundance of research on the productivity of software development teams and developers exists, identifying many factors and their effects in different contexts and concerning different aspects of productivity. Objective: This thesis aims to collect and analyse recent research results on factors that are related to or directly influence the productivity of teams or developers and on how they influence it in different contexts, and to briefly summarise the metrics used in recent studies to measure productivity. Method: The method selected to reach these aims was a systematic literature review of relevant studies published between 2017 and 2022. Altogether, 48 studies were selected and analysed during the review. Results: The metrics used by the reviewed studies for measuring productivity range from the time used for completing a task to self-evaluated productivity to the number of commits contributed. Some of these are used by multiple studies, many by only one or a few, and they measure productivity from different angles. Various factors were found, ranging from team size to experienced emotion to working from home during the COVID-19 pandemic. The relationships found between these factors and aspects of the productivity of developers and teams range from positive to negative, and sometimes both, depending on the context and the productivity metric in question. Conclusions: While many relationships were found between various factors and the productivity of software developers and development teams in this review, these do not cover all possible factors, relationships or measurable productivity aspects in all possible contexts. Additionally, one should keep in mind that most of the found relationships do not imply causality.
  • Saarinen, Tuomo (2020)
    The use of machine learning and algorithms in decision making processes in our everyday life has been growing rapidly. The uses range from bank loans and taxation to criminal sentences and child care decisions. Because of the potentially high importance of such decisions, we need to make sure that the algorithms used are as unbiased as possible. The purpose of this thesis is to provide an overview of the possible biases in algorithm-assisted decision making, of how these biases affect the decision making process, and to go through some proposals on how to tackle these biases. Some of the proposed solutions are more technical, including algorithms and different ways to filter bias out of the machine learning phase. Other solutions are more societal and legal, and address the things we need to take into account when deciding what can be done to reduce bias through legislation or by enlightening people about the issues of data mining and big data.
  • Hård, Petri (2017)
    The aim of this study is to find out about the away trips that Finnish ice hockey supporters make within Finland. The idea is also to find out whether the destination cities of the trips could better benefit from travelling hockey fans. The study aims at finding the basic frame by which the fan organizations choose their destinations, the motives of those who participate, and visitors' perceptions of the services available at the destination ice halls. Both qualitative and quantitative methods are used. Few earlier studies about the topic exist. Literature about sport tourism concentrates mostly on people doing the sports themselves, while academic literature about sport fans is usually about the psychological side of fandom. Several studies of Finnish ice hockey fans exist, though their point of view is also often psychological, and many of them are only thesis-level works. Because of the lack of earlier research on the topic, this work can be seen as baseline research. Fan organizations choose their travel destinations mostly based on the game schedule. Trips are mainly made to games played on Saturdays; on other days of the week the destination should be within a short distance. Distance to the destination is not very important on Saturdays unless the destination is very far away. Visiting fans do not spend much time in the destination city on a regular trip: usually the fans enter the ice hall straight after stepping out of the bus and return soon after the game. However, overnight trips might interest the fans, especially if the destination is far away. Previous experience of destinations also affects the choices fan organizations make. The most important reasons for participating in a trip are seeing the favourite team play and supporting the team. Travelling itself is not an important factor in the decision to travel, yet there could be interest in going on overnight trips more often than fans currently do. Company also affects travel decisions, as people prefer going to games with friends or acquaintances. SM-Liiga ice halls seem to have all the different service types away trippers need. There is not much demand for a variety of services, as visiting fans mostly buy just drinks or food. The supply of these services is good, but visitors are less happy with the quality, variety and price of the products sold. An important factor in the game experience is the seating arrangements at the ice hall, which were found to correlate with happiness with the overall game experience. Ice halls are considered safe and security works well. All in all, visiting fans are happier with the service they receive at the ice halls than with the services themselves. To improve their service in the eyes of visiting fans, the hosts should pay attention to the variety of food and drinks and offer visitors seats that are suitable for their needs. Host organizations and local stakeholders could benefit from offering visiting fans moderately priced packages that could include, for example, transportation, a game ticket, a meal and accommodation, or some of these services. This way they could get visitors to spend more money in the destination city, and at the same time income would spread to a larger number of stakeholders.
  • Paavilainen, Topi (2018)
    Minimum-cost minimum path cover is a graph-theoretic problem with an application in gene sequencing problems in bioinformatics. This thesis studies decomposing graphs as a preprocessing step for solving the minimum-cost minimum path cover problem. By decomposing graphs, we mean splitting graphs into smaller pieces. When the graph is split along its maximum antichains, the solution for the minimum-cost minimum path cover problem can be computed independently in the small pieces, and in the end all the partial solutions are joined together to form the solution for the original graph. As a part of our decomposition pipeline, we introduce a novel way to solve the unweighted minimum path cover problem, and with that algorithm we also obtain a new time/space tradeoff for reachability queries in directed acyclic graphs. This thesis also includes an experimental section, where an example implementation of the decomposition is tested on randomly generated graphs. On the test graphs the decomposition does not really yield a speedup compared to solving the same instances without it. However, the experiments give some insight into the parameters that affect the decomposition's performance and into how the implementation could be improved.
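    For background, the classical unweighted, vertex-disjoint special case of minimum path cover in a DAG reduces to maximum bipartite matching; the sketch below illustrates that textbook reduction (the thesis itself treats a more general minimum-cost variant and a novel algorithm).
    ```python
    def min_path_cover_size(n, edges):
        # Minimum number of vertex-disjoint paths covering a DAG on n nodes:
        # split each node into left/right copies, match left u to right v for
        # every edge (u, v); the cover size is n - |maximum matching|.
        adj = [[] for _ in range(n)]
        for u, v in edges:
            adj[u].append(v)
        match_right = [-1] * n  # match_right[v] = left endpoint matched to v

        def augment(u, seen):
            # Try to find an augmenting path starting from left node u.
            for v in adj[u]:
                if not seen[v]:
                    seen[v] = True
                    if match_right[v] == -1 or augment(match_right[v], seen):
                        match_right[v] = u
                        return True
            return False

        matching = sum(augment(u, [False] * n) for u in range(n))
        return n - matching

    # 0->1->3 and 2->3: two paths suffice, e.g. (0, 1, 3) and (2).
    print(min_path_cover_size(4, [(0, 1), (1, 3), (2, 3)]))  # -> 2
    ```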
  • Kilpinen, Arttu (2022)
    The objective of the shortest common superstring problem is to find a string of minimum length that contains all keywords in the given input as substrings. Shortest common superstrings have many applications in the fields of data compression and bioinformatics. For example, a common superstring can be seen as a compressed form of the keywords it is generated from. Since the shortest common superstring problem is NP-hard, we focus on approximation algorithms that implement the so-called greedy heuristic. It turns out that the actual shortest common superstring is not always needed; instead, it is often enough to find an approximate solution of sufficient quality. We provide an implementation of Ukkonen's linear-time algorithm for the greedy heuristic. The practical performance of this implementation is measured by comparing it to another implementation of the same heuristic. We also hypothesize that shortest common superstrings can potentially be used to improve the compression ratio of the Relative Lempel-Ziv data compression algorithm. This hypothesis is examined and shown to be valid.
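    The greedy heuristic itself is easy to state: repeatedly merge the two keywords with the largest suffix-prefix overlap. The naive sketch below illustrates the heuristic; Ukkonen's algorithm implements the same strategy in linear time using Aho-Corasick automata, which this toy version does not attempt.
    ```python
    def overlap(a, b):
        # Length of the longest suffix of a that is a prefix of b.
        for k in range(min(len(a), len(b)), 0, -1):
            if a.endswith(b[:k]):
                return k
        return 0

    def greedy_superstring(keywords):
        # Drop duplicates and keywords contained in another keyword, then
        # repeatedly merge the pair with maximum overlap.
        strings = [s for s in set(keywords)
                   if not any(s != t and s in t for t in keywords)]
        while len(strings) > 1:
            k, a, b = max((overlap(a, b), a, b)
                          for a in strings for b in strings if a != b)
            strings.remove(a)
            strings.remove(b)
            strings.append(a + b[k:])
        return strings[0]

    print(greedy_superstring(["ate", "tea", "eat"]))  # e.g. "eatea"
    ```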
  • Takala, Saara (2024)
    Ultra-low frequency (ULF) waves in the Pc4-Pc5 (2-25 mHz) range have been observed to accelerate trapped 1-10 MeV electrons in the Earth's radiation belts. This acceleration can lead to particle losses and injections that occur on timescales comparable to the particle drift periods. Current models rely on diffusion equations written in the form of Fokker-Planck equations and are not suitable for describing fast temporal variations in the distribution function. This thesis is a study of the fast transport of equatorially trapped electrons in the radiation belts. We look at solutions for the time evolution of the linear part of the perturbed distribution function using both analytical and numerical methods. Based on this work, we build a simple model of fast transport in the radiation belts using a spectral PDE framework called Dedalus. The resulting program is a computationally inexpensive, simple approach to modelling drift-periodic signatures on fast timescales. In this study we investigate the behaviour of the distribution function in three systems: a simple system without a wave term, and systems with a single non-resonant or resonant ULF wave. The wave solutions are evaluated with magnetic field perturbations of different magnitudes. The Earth's magnetic field is modelled with the Mead field. The numerical solution of the perturbed differential equation is studied for relativistic equatorially trapped electrons. Phase-mixing is found to happen regardless of field fluctuations or resonance. The non-resonant wave solution shows time-delayed, spatially localized structures forming in the equatorial plane in the presence of large magnetic field fluctuations. These transients are also seen in the analytical solution and provide a new theoretical explanation for the ubiquitous observation of drift echoes in the inner and outer radiation belts of the Earth (Li et al., 2024).
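    For contrast with the fast-transport approach, the slow-diffusion models referred to here are conventionally written in the radial-diffusion form of the Fokker-Planck equation (standard form, assumed rather than quoted from the thesis):
    ```latex
    % Radial diffusion of the drift-averaged phase-space density f(L, t),
    % with D_LL the ULF-wave-driven radial diffusion coefficient:
    \[
    \frac{\partial f}{\partial t}
      = L^{2} \frac{\partial}{\partial L}
        \left( \frac{D_{LL}}{L^{2}} \frac{\partial f}{\partial L} \right)
    \]
    ```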
  • Tamminen, Juuda (2021)
    This master’s thesis is an ethnographic study about everyday urban encounters and social interaction. It explores how residents in the suburban housing estate of Kontula in East Helsinki negotiate social and cultural difference in their everyday lives. The study focuses on the semi-public spaces of the local shopping centre and examines residents’ capacity to live with difference. The study contributes to a multi-vocal and historically informed understanding of the processes that shape the social landscapes of a socially mixed and multi-ethnic neighbourhood. The study is based on fieldwork carried out in two phases between August 2019 and February 2020. The study applies anthropological methods of participant observation and qualitative interviews. The eleven research participants are adults between the ages of 30 and 71 who live in the neighbourhood and have extensive personal experience of the shopping centre. Although the interviews were a crucial aspect of the meaning-making process, the study relies primarily on participant observation in constructing an interpretation and analysis of social interaction at an intimate scale. In order to contextualise everyday encounters at the shopping centre, this thesis assesses how Kontula, as a stigmatised territory in the urban margins, encapsulates a complex interplay between moral claims of a “good” and “bad” neighbourhood. While some residents confirm negative stereotypes about the shopping centre and bring attention to local social problems and issues of unsafety, others downplay these problems and instead emphasise how tolerant and sociable the shopping centre is. Observations of stigmatised territories reveal how the participation of marginalised individuals and ethnic minorities at the shopping centre challenges the processes and discourses that constitute them as objects of fear and nuisance. The concepts of conviviality and cosmopolitan canopies are used to analyse local social interactions. The analysis suggests that the capacity to live with difference is enabled by ordinary meeting places, such as pubs and cafés, where residents come into regular social contact and engage with diverse individuals and groups. While the maintenance of ethnic boundaries remains salient in the way residents negotiate the social landscapes, these ordinary spaces of encounter situationally reconfigure categories of “us” and “them” and thus expand local meanings of who belongs. The analysis concludes that the contested meanings of belonging and the everyday negotiation of difference are attributes of an open multi-ethnic society coming to terms with difference and change. The analysis suggests that an equal right to participate and interact in shared urban spaces, rather than community consensus, is the hallmark of a society’s capacity to live with difference.
  • Ollinaho, Pirkka (University of Helsinki, 2010)
    Sea-surface wind observations from previous-generation scatterometers have been successfully assimilated into Numerical Weather Prediction (NWP) models. Impact studies conducted with these assimilation implementations have shown a distinct improvement in model analysis and forecast accuracy. The Advanced Scatterometer (ASCAT), flown on Metop-A, offers improved sea-surface wind accuracy and better data coverage than the previous-generation scatterometers. Five individual case studies are carried out. The effect of including ASCAT data in the High Resolution Limited Area Model (HIRLAM) assimilation system (4D-Var) is found to be neutral to positive for situations where the general flow is from the Atlantic Ocean. For northerly flow regimes the effect is negative; this is later attributed to problems in modelling northerly flows and to the lack of a suitable verification method. Suggestions for and an example of an improved verification method are presented later on. A closer examination of the evolution of a polar low is also shown. It is found that the ASCAT assimilation scheme improves the forecast of the initial evolution of the polar low, but the model advects the strong low-pressure centre eastward too fast. Finally, the flaws of the implementation are found to be small, and implementing the ASCAT assimilation scheme in the operational HIRLAM suite is feasible, although validation over a longer time period is still required.
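    Scatterometer winds enter 4D-Var through the observation term of the assimilation cost function; for context, its standard textbook form is sketched below (not quoted from the thesis).
    ```latex
    % Standard 4D-Var cost function: x_0 is the initial state, x_b the
    % background, B and R_i error covariances, y_i the observations in
    % time window i, H_i the observation operator and M_i the forecast model.
    \[
    J(\mathbf{x}_0) =
      \tfrac{1}{2} (\mathbf{x}_0 - \mathbf{x}_b)^{\mathsf T}
        \mathbf{B}^{-1} (\mathbf{x}_0 - \mathbf{x}_b)
      + \tfrac{1}{2} \sum_{i=0}^{N}
        \bigl( \mathbf{y}_i - H_i[M_i(\mathbf{x}_0)] \bigr)^{\mathsf T}
        \mathbf{R}_i^{-1}
        \bigl( \mathbf{y}_i - H_i[M_i(\mathbf{x}_0)] \bigr)
    \]
    ```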
  • Pelttari, Hannu (2020)
    Federated learning is a method to train a machine learning model on multiple remote datasets without the need to gather the data from the remote sites to a central location. In healthcare, gathering data from different hospitals into a central location can be a difficult and time-consuming task, due to privacy concerns and regulations regarding the use of sensitive data, making federated learning an attractive alternative to more traditional methods. This thesis adapted an existing federated gradient boosting model, developed a new federated random forest model, and applied both to mortality prediction in intensive care units. The results were then compared to the centralized counterparts of the models. The results showed that while the federated models did not perform as well as the centralized models on a similarly sized dataset, the federated random forest model can achieve superior performance when trained on multiple hospitals' data compared to centralized models trained on a single hospital's data. In scenarios where the centralized models had data from multiple hospitals, the federated models could not perform as well as the centralized models. It was also found that the performance of the centralized models could not be improved with further federated training. In addition to practical advantages, such as the possibility of parallel or asynchronous training without modifications to the algorithm, the federated random forest performed better than the federated gradient boosting in all scenarios. The performance of the federated random forest was also found to be more consistent across scenarios than that of federated gradient boosting, which was highly dependent on factors such as the order in which the hospitals were traversed.
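    One simple way to realize a federated random forest, hinted at by the parallel-training remark above, is to train a forest locally at each hospital and merge the fitted trees; the scikit-learn sketch below shows the idea. The merge-by-concatenation step is an assumption for illustration, not necessarily the exact construction of the thesis.
    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # Stand-ins for two hospitals' local datasets (never pooled centrally).
    X_a, y_a = make_classification(n_samples=500, random_state=0)
    X_b, y_b = make_classification(n_samples=500, random_state=1)

    # Each site trains a forest locally on its own data.
    rf_a = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_a, y_a)
    rf_b = RandomForestClassifier(n_estimators=50, random_state=1).fit(X_b, y_b)

    # "Federate" by pooling the fitted trees: only models travel, not data.
    # Assumes identical feature ordering and class labels at every site.
    federated = rf_a
    federated.estimators_ = rf_a.estimators_ + rf_b.estimators_
    federated.n_estimators = len(federated.estimators_)

    print(federated.predict(X_b[:5]))
    ```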