
Browsing by Title


  • Kaupinmäki, Santeri (2018)
    The fundamental building blocks of quantum computers, called qubits, can be physically realized through any quantum system that is restricted to two possible states. The power of qubits arises from their ability to be in a superposition of these two states, allowing for the development of quantum algorithms that are impossible for classical computers. However, interactions with the surrounding environment destroy the superposition in a process called decoherence, which makes it important to find ways to model these interactions and mitigate them. In this thesis we derive a non-Markovian master equation for the spin-boson model, with a time-dependent two-level system, using the reaction coordinate representation. We show numerically that in the superconducting qubit regime this master equation maintains the positivity of the density operator for relevant parameter ranges, and is able to model non-Markovian effects between the system and the environment. We also compare the reaction coordinate master equation to a Markovian master equation with parameters taken from real superconducting qubits. We demonstrate that the Markovian master equation fails to capture the system–bath correlations for short times, and in many cases overestimates relaxation and coherence times. Finally, we test how a time-dependent bias affects the evolution of the two-level system. The bias is assumed to be constant with an additive term arising from an externally applied time-dependent plane wave control field. We show that an amplitude, angular frequency, and phase shift for the plane wave can be chosen such that the control field improves the coherence time of the two-level system.
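    The reaction coordinate master equation derived in the thesis is beyond the scope of a listing like this, but the Markovian benchmark it is compared against can be sketched in a few lines. The Python snippet below (hypothetical parameter values, hbar = 1, not the thesis's model) evolves a two-level system's density operator under a Lindblad-form master equation and shows the coherence decaying:

    ```python
    import numpy as np

    # Pauli matrices and the lowering operator for a two-level system
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sm = np.array([[0, 0], [1, 0]], dtype=complex)  # sigma_minus

    eps, delta, gamma = 1.0, 0.2, 0.05  # bias, tunneling, relaxation rate (hypothetical)
    H = 0.5 * eps * sz + 0.5 * delta * sx

    def lindblad_rhs(rho):
        """d(rho)/dt for a Markovian master equation with one relaxation channel."""
        comm = -1j * (H @ rho - rho @ H)
        diss = gamma * (sm @ rho @ sm.conj().T
                        - 0.5 * (sm.conj().T @ sm @ rho + rho @ sm.conj().T @ sm))
        return comm + diss

    rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # start in superposition |+>
    dt, steps = 0.01, 2000
    for _ in range(steps):  # 4th-order Runge-Kutta integration
        k1 = lindblad_rhs(rho)
        k2 = lindblad_rhs(rho + 0.5 * dt * k1)
        k3 = lindblad_rhs(rho + 0.5 * dt * k2)
        k4 = lindblad_rhs(rho + dt * k3)
        rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    print("coherence |rho_01| after evolution:", abs(rho[0, 1]))  # decays under decoherence
    ```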
  • Virtanen, Tomi (2016)
    Modern applications are increasingly web applications. Unlike traditional applications, web applications do not run on a single device: they consist of clients using the application and servers serving those clients. Web applications operate in an environment where functionality is distributed across the network, which makes synchronizing data between clients and servers central. Changes to data travel as events, and managing these events is essential to the application's operation. Implementing event-handling logic correctly often proves difficult. This thesis examines reactive programming on the server side of web applications. Reactive programming aims to solve the problems of event handling, which often suffers from complex dependencies, poor understandability and poor testability; it also aims to clarify the structure of applications. For the thesis, a simple web application was implemented in which users can converse with each other in real time. Users can send private messages to other users as well as write messages intended for all users to see. The server side of the application was implemented with two reactive libraries and, for comparison, non-reactively. The reactive versions were implemented with the RxJs and Bacon.js libraries. The effect of reactive programming on server-side programming was studied with static source-code analysis. The results indicate that reactive programming slightly increases the size of the source code but at the same time reduces its complexity.
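    The RxJs and Bacon.js implementations are not reproduced here; the sketch below illustrates the underlying idea, an event stream that subscribers react to and that can be composed with operators such as filter, in plain, library-free Python (all names hypothetical):

    ```python
    class Stream:
        """A minimal observable event stream, in the spirit of RxJs/Bacon.js."""
        def __init__(self):
            self._subscribers = []

        def subscribe(self, fn):
            self._subscribers.append(fn)

        def push(self, event):
            for fn in self._subscribers:
                fn(event)

        def filter(self, predicate):
            out = Stream()
            self.subscribe(lambda e: out.push(e) if predicate(e) else None)
            return out

    # A chat server as stream transformations: all messages in, private ones routed separately.
    messages = Stream()
    public = messages.filter(lambda m: m["to"] is None)
    private = messages.filter(lambda m: m["to"] is not None)

    public.subscribe(lambda m: print(f"broadcast: {m['text']}"))
    private.subscribe(lambda m: print(f"to {m['to']}: {m['text']}"))

    messages.push({"from": "alice", "to": None, "text": "hello everyone"})
    messages.push({"from": "alice", "to": "bob", "text": "hi bob"})
    ```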
  • Chydenius, Arto (2014)
    Reactive programming has been claimed to simplify the applications built with it. The paradigm has also been suggested to be particularly well suited to programming the browser side of web applications. This master's thesis investigates the effects of reactive programming on the size and complexity of source code. For the study, a web application was designed and implemented in three versions: two reactive and one non-reactive. The source code of the versions was compared using static analysis methods and by examining how a given piece of functionality is implemented in each version.
  • Hiilesmaa, Ilana (2020)
    The RS Canum Venaticorum (RS CVn) variables are rapidly rotating, close detached chromospherically active binary stars (CABS). Their spectra show strong Ca II H and K emission lines, which indicate the presence of solar-type chromospheric activity. The observed amplitudes of brightness variations in RS CVn stars are caused by large starspots. Their orbital periods are typically a few days. EI Eridani (EI Eri) is an active, rapidly rotating (v sin i = 51 ± 0.5 km/s) binary star that belongs to the class of RS CVn variables. The primary component is a subgiant star of spectral type G5 IV. Its rotation and orbital motion are synchronised, i.e. P_rot = P_phot ≈ P_orb. We analyse 30 years of standard Johnson V differential photometry of EI Eri. The data were obtained with the Tennessee State University T3 0.4-meter Automatic Photometric Telescope (APT). We analyse the data with a new two-dimensional period finding method formulated by Jetsu (2019). This new method allows us to detect the real light curves of long-lived starspots of EI Eri. We also solve the parameters of these real light curves: periods, amplitudes and minimum epochs. Our analysis shows that the parameters of these real light curves are connected to long-lived starspots, and that there are spatial correlations between these parameters. We detected two starspot groups with different rotation periods, P_1 ≈ 1.915920 ± 0.000079 days and P_2 ≈ 1.9472324 ± 0.0000040 days, on the surface of EI Eri. The faster rotating starspots (P_1) are non-stationary and the slower rotating starspots (P_2) are stationary in the orbital frame of reference. The slower rotating starspots lie at the longitudes coinciding with the line connecting the centres of the members of EI Eri. The slower rotating starspots have larger amplitudes than the faster rotating ones and hence dominate the observed light curves. Our results show that the hypothesis that the observed light curve is the sum of the real light curves (Jetsu, 2019) is valid for EI Eri. We can also show that the starspots of EI Eri are dark. Traditional one-dimensional period finding methods have given spurious results, such as rapid rotation period changes of starspots or abrupt 180-degree longitudinal shifts of activity. Because of the short lap cycle period P_lap = 119.14 ± 0.30 days between the slower and the faster rotating starspots of EI Eri, the light curves have previously been misinterpreted.
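    The two detected periods make the lap geometry easy to illustrate. The toy Python fit below (synthetic data, not the Jetsu 2019 method) models an observed light curve as the sum of two sinusoidal "real" light curves with the reported periods, recovers their amplitudes, and computes the ~119-day lap period between them:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    P1, P2 = 1.915920, 1.9472324  # the two detected rotation periods (days)

    def two_spot_model(t, a1, b1, a2, b2, m0):
        """Sum of two sinusoids, one per starspot group, plus a mean magnitude."""
        w1, w2 = 2 * np.pi / P1, 2 * np.pi / P2
        return (m0 + a1 * np.sin(w1 * t) + b1 * np.cos(w1 * t)
                   + a2 * np.sin(w2 * t) + b2 * np.cos(w2 * t))

    # Synthetic "observations": the slower spots (P2) dominate, as found for EI Eri.
    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0, 300, 400))          # 300 nights of sampling
    y = two_spot_model(t, 0.02, 0.01, 0.05, 0.03, 9.0) + rng.normal(0, 0.005, t.size)

    params, _ = curve_fit(two_spot_model, t, y, p0=[0.01, 0.01, 0.01, 0.01, 9.0])
    amp1 = np.hypot(params[0], params[1])
    amp2 = np.hypot(params[2], params[3])
    print(f"amplitude of P1 group: {amp1:.3f} mag, P2 group: {amp2:.3f} mag")

    P_lap = 1 / abs(1 / P1 - 1 / P2)  # beat period between the two groups
    print(f"lap period: {P_lap:.2f} days")  # ~119 d, as reported in the abstract
    ```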
  • Auno, Sami (2018)
    Chemical Exchange Saturation Transfer (CEST) is a novel Magnetic Resonance Imaging (MRI) technique that utilises exchange reactions between metabolites and tissue water to map metabolite concentration or tissue pH noninvasively. Similarly to Magnetic Resonance Spectroscopy (MRS), CEST is able to detect many endogenous metabolites, but unlike MRS, CEST is based on imaging and thus enjoys the speed of modern MR imaging. On the other hand, CEST also suffers from the same difficulties as MRI and MRS. One of the most common sources of image artifacts in MRI is subject motion during imaging. Many different motion correction methods have been devised. Recently, a novel real-time motion correction system was developed for MRS. This method is based on volumetric navigators (vNav) that are acquired multiple times, interleaved with the parent measurement. Navigator image comparison, affine matrix calculation, and acquisition gradient correction, which adjusts the field of view to match subject head motion, are done online and in real time. The purpose of this thesis is to implement this real-time motion correction method in CEST-MRI and study its efficacy and correction potential in phantoms and in healthy volunteers on a 7T MR scanner. Additionally, it is hypothesised that the vNav images may be used to correct for motion-related receiver sensitivity (B1-) inhomogeneities. Glutamate was chosen as the metabolite of interest because it is the most abundant neurotransmitter in the human brain and because of its involvement in both normal cognitive function and many brain pathologies. Since glutamate has an amine group, it undergoes chemical exchange with water and is thus a usable metabolite for CEST imaging. A glutamate phantom was constructed to show the glutamate concentration sensitivity of CEST and to test and optimise the CEST sequence. Seven healthy volunteers were imaged over a period of two months. All but one volunteer were imaged more than once (2-4 times). Subjects were measured without voluntary head motion and with controlled left-right and up-down head movements. All measurements were performed with and without motion correction to test the motion and B1- correction methods. Additionally, three volunteers were measured with a dynamic CEST experiment to assess the reproducibility of CEST. The real-time motion correction method was found to be able to correct for small, involuntary head movements. 18 % of the CEST maps measured without motion correction were found to have motion artifacts, whereas the equivalent number for maps with motion correction was 0 % (4/22 maps versus 0/18 maps). Larger (>0.7° or >0.7 mm in one coregistration step), voluntary head movements could not be corrected adequately. The vNav images could be used to correct for B1- inhomogeneities. This was found to improve CEST spectrum quality and to remove lateral inhomogeneities from the CEST maps. The reproducibility of CEST-MRI could not be established; however, dynamic CEST measurements were found to be stable, with only a small contrast fluctuation of 4 % between consecutive maps due to noise.
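    CEST contrast is commonly quantified as the asymmetry of the z-spectrum around the water resonance, MTR_asym(Δω) = Z(-Δω) - Z(+Δω). The sketch below (toy z-spectrum values, not the thesis's processing pipeline) computes the asymmetry at the ~3 ppm amine offset relevant for glutamate:

    ```python
    import numpy as np

    # Z-spectrum: normalized water signal S/S0 versus saturation offset (ppm from water).
    offsets = np.linspace(-5, 5, 101)                  # ppm
    z = 1 - 0.8 * np.exp(-(offsets / 1.2) ** 2)        # toy direct water saturation
    z -= 0.05 * np.exp(-((offsets - 3.0) / 0.5) ** 2)  # toy CEST dip at +3 ppm (amine protons)

    def mtr_asym(offsets, z, delta):
        """MTR_asym(delta) = Z(-delta) - Z(+delta), interpolating the z-spectrum."""
        z_neg = np.interp(-delta, offsets, z)
        z_pos = np.interp(+delta, offsets, z)
        return z_neg - z_pos

    print(f"MTR_asym at 3.0 ppm: {mtr_asym(offsets, z, 3.0):.3f}")  # positive -> CEST effect
    ```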
  • Hyytiälä, Otto (2021)
    Remote sensing satellites produce massive amounts of data about the Earth every day. This earth observation data can be used to solve real-world problems in many different fields. The Finnish space data company Terramonitor has been using satellite data to produce new information for its customers. The process of producing valuable information includes finding raw data, analysing it and visualizing it according to the client's needs. This process contains a significant amount of manual work that is done at local workstations. Because satellite data can quickly become very big, it is not efficient to use unscalable processes that involve a lot of waiting time. This thesis addresses the problem by introducing an architecture for a cloud-based real-time processing platform that allows satellite image analysis to be done in a cloud environment. The architectural model is built using microservice patterns to ensure that the solution scales to match changing demand.
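    As a purely hypothetical illustration of the microservice pattern, a stateless analysis service like the Flask sketch below can be replicated behind a load balancer to match demand. Endpoint names and payloads are invented; this is not Terramonitor's API or the thesis's architecture:

    ```python
    # Hypothetical sketch of one stateless microservice in such a platform.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/analyze", methods=["POST"])
    def analyze():
        job = request.get_json()
        # In a real platform this would enqueue the scene for processing
        # (e.g. an index computation) for other services to pick up.
        return jsonify({"scene_id": job["scene_id"], "status": "queued"}), 202

    if __name__ == "__main__":
        app.run(port=8080)
    ```

    Because the service holds no state between requests, any number of identical replicas can serve the queue, which is what makes the architecture scale with demand.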
  • Kääriäinen, Kristiina (2016)
    In 2009, the mining company Anglo American found a significant Ni-Cu-PGE deposit in Sodankylä, Finnish Lapland. The deposit is located underneath the Viiankiaapa mire and has since been named Sakatti. During the 1970s and 1980s, the Geological Survey of Finland carried out a targeting till geochemistry program that covered most of Finnish Lapland. The ore potential of Viiankiaapa was not recognized in the original research report from the area. The targeting till geochemistry dataset is an example of the vast amount of existing geological data that is publicly available but has not been widely used. The targeting till geochemistry results from the Viiankiaapa area were reanalysed using modern methods to find out whether they contain any indications of the Sakatti deposit. Principal component analysis, k-means clustering and element ratios formed an effective combination for recognizing the potentially mineralized samples. Self-organizing maps would have benefited from more detailed data. All methods were used first on the targeting till geochemistry data and then on combined datasets that included information about the Sakatti discovery. Clear indications of the Sakatti deposit were found in till samples adjacent to the known ore outcrops, where the samples had high Ni concentrations and element ratios similar to the ore. The most significant limitation of the targeting till geochemistry data is the lack of stratigraphic information. The problem could be partly overcome by using recent stratigraphical interpretations from a different study. Even considering the weaknesses of the targeting till geochemistry dataset, the results from Viiankiaapa show that it holds valuable exploration potential. The dataset could be used in ore prospecting surveys elsewhere to point out the most promising targets.
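    A minimal sketch of the analysis combination named above, PCA followed by k-means plus an element ratio, on hypothetical till geochemistry values (scikit-learn; not the thesis's actual data or parameters):

    ```python
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    # Rows = till samples, columns = element concentrations (hypothetical values, ppm).
    elements = ["Ni", "Cu", "Co", "Cr", "Zn"]
    X = np.array([
        [2100, 850, 95, 310, 60],   # near a known ore outcrop
        [1900, 790, 88, 290, 55],
        [40,   25,  12, 80,  45],   # background till
        [55,   30,  15, 95,  50],
        [60,   20,  10, 70,  40],
    ])

    Xs = StandardScaler().fit_transform(X)          # scale before PCA
    scores = PCA(n_components=2).fit_transform(Xs)  # compress correlated elements
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)

    ni_cu = X[:, 0] / X[:, 1]  # element ratio used alongside the clusters
    for i, (lab, r) in enumerate(zip(labels, ni_cu)):
        print(f"sample {i}: cluster {lab}, Ni/Cu = {r:.2f}")
    ```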
  • Lehtonen, Tuomo (2019)
    Formal argumentation is a vibrant research area within artificial intelligence, in particular in knowledge representation and reasoning. Computational models of argumentation divide into abstract and structured formalisms. Since its introduction in 1995, abstract argumentation, where the structure of arguments is abstracted away, has been much studied and applied. Structured argumentation formalisms, on the other hand, contain the explicit derivation of arguments. This is motivated by the importance of the construction of arguments in applications of argumentation formalisms, but it also makes structured formalisms conceptually, and often computationally, more complex than abstract argumentation. The focus of this work is on assumption-based argumentation (ABA), a major structured formalism. Specifically, we address the relative lack of efficient computational tools for reasoning in ABA compared to abstract argumentation: the computational efficiency of ABA reasoning systems has been markedly lower than that of systems for abstract argumentation. In this thesis we introduce a declarative approach to reasoning in ABA via answer set programming (ASP), drawing inspiration from existing tools for abstract argumentation. In addition, we consider ABA+, a generalization of ABA that incorporates preferences into the formalism. The complexity of reasoning in ABA+ is higher than in ABA for most problems. We are able to extend our declarative approach to some ABA+ reasoning problems. We show empirically that our approach vastly outperforms previous reasoning systems for ABA and ABA+.
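    The thesis's ASP encodings are not reproduced here, but the kind of semantics they compute can be illustrated by brute force. The sketch below enumerates the stable extensions (conflict-free sets that attack every outside argument) of a tiny abstract argumentation framework with hypothetical arguments and attacks:

    ```python
    from itertools import combinations

    # A tiny abstract argumentation framework: arguments and an attack relation.
    args = {"a", "b", "c"}
    attacks = {("a", "b"), ("b", "a"), ("b", "c")}

    def is_stable(ext):
        """Stable extension: conflict-free, and attacks every argument outside it."""
        ext = set(ext)
        if any((x, y) in attacks for x in ext for y in ext):
            return False  # not conflict-free
        outside = args - ext
        return all(any((x, y) in attacks for x in ext) for y in outside)

    for r in range(len(args) + 1):
        for ext in combinations(sorted(args), r):
            if is_stable(ext):
                print("stable extension:", set(ext))
    ```

    Declarative ASP tools express exactly this search as logic rules and let a solver do the enumeration, which is what makes them scale far beyond brute force.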
  • Avela, Henri (2019)
    Lipidomics is a quickly growing area of metabolomics research: no longer seen merely as passive cell-membrane building blocks, lipids contribute actively to cell signaling and identification and are therefore seen as potential biomarkers (e.g. for early-stage cancer diagnostics). The literature part reviews 63 articles on UHPLC/MS methods published between 2017 and May 2019. The review focuses especially on glycerophospholipids (GPs); in addition, an overview of basic glycerolipids (GLs) and sphingolipids (SPs) is given, which shapes the emphasis and narration of the lipid class representations in the review. Chromatographic methods in lipidomics are used to achieve either very selective or all-encompassing analyses of lipid classes. Since HPLC/MS is insufficient for fully covering low-abundance lipids, UHPLC/MS was mostly used for metabolic profiling, where its large analyte range, owing to high sensitivity, separation efficiency and resolution, outperforms other methods. Imaging techniques have further diverged towards DIMS and other novel non-chromatographic methods, e.g. Raman techniques with single-cell resolution. The field of mass-spectral lipidomics is divided between studies using isotope-labeled standards and fully standardless algorithm-based analyses; furthermore, the use of machine learning and statistical analysis has increased. The experimental part focused on LC-IMS-MS and plasma-based in-house database method development for targeted analysis of ascites. Method development included optimization of the chromatography, adduct species selection and data-independent/-dependent fragmentation. In total, 130 candidate species from the LIPID MAPS database were used for identification, with a minimum identification score of 79% in the Qualitative Workflows with retention times (RTs) and in the Mass Profiler program with collision cross-sections (CCSs). Plasma sample analyses resulted in the documentation of 70 RT and 36 CCS values. Two lipid extraction methods (Folch and BUME), with pre-sampling surrogates and post-sampling internal standards, were compared with each other. The comparison confirmed the BUME method to be superior in terms of ecology, workload, health and extraction-related properties. The lipidome of ascites has rarely been studied because it is available only from diseased patients; the logistics of realising a representative analysis also limit such studies.
  • Rönkkö, Tuukka (2016)
    The literature part of this thesis consists of a review of recently introduced forms of solid phase microextraction (SPME): thin film microextraction (TFME), in-tube solid phase microextraction (IT-SPME) and the closely related techniques of capillary in tube adsorption trap/solid phase dynamic extraction (INCAT/SPDE). The experimental part covers the study of reagents for on-fiber derivatization of low molecular weight aliphatic amines at atmospheric concentrations. In TFME a thin film of sorbent is used for extraction instead of a rod-like sorbent as in fiber-SPME. This increases analyte uptake and capacity compared to fiber-SPME, making TFME suitable for non-equilibrium extraction. TFME is used with both gas and liquid chromatography, although the large size of the film presents problems in desorption, especially in gas chromatography. Common applications of TFME are environmental monitoring and in vivo extraction. IT-SPME is a dynamic type of SPME most often coupled with liquid chromatography, in which a liquid sample is pumped through an extraction capillary. It is relatively easily automated with most autosamplers. In the most common form a sorbent is coated on the inside walls of the capillary. Recently, packed types of IT-SPME have been introduced, which can achieve very high extraction efficiencies. In addition, sorbent materials that change their properties according to environmental factors such as temperature, potential and magnetic field seem promising for future development. INCAT/SPDE utilizes internally coated metal needles for extraction. Although similar to IT-SPME, it is used for sampling gaseous compounds by pumping them through the needle. Desorption and analysis are usually performed with a gas chromatograph. INCAT/SPDE has some advantages over fiber-SPME, such as larger sorbent volume and robustness. However, it is currently limited to polydimethylsiloxane-based sorbents, which limits possible applications. In the experimental part, the possibilities of using allyl isothiocyanate, pentafluorobenzaldehyde (PFBAY) and pentafluorobenzyl chloroformate (PFBCF) in simultaneous extraction and on-fiber derivatization of low molecular weight aliphatic amines were explored. Separation and analysis were performed with gas chromatography-mass spectrometry. Allyl isothiocyanate did not derivatize the analytes. On-fiber derivatization with PFBAY was successful for both ethylamine and methylamine, but the concentrations required to observe a signal from the derivatives were too high to use PFBAY for air samples. PFBCF was identified as the most promising reagent, working for both dimethylamine and ethylamine. It was also possible to construct a calibration function for gaseous dimethylamine.
  • Wong, Davin (University of Helsinki, 2007)
    We investigate methods for recommending multimedia items suitable for an online multimedia sharing community and introduce a novel algorithm called UserRank for ranking multimedia items based on link analysis. We also apply EigenRumor, originally from the blogosphere domain, to multimedia. Furthermore, we present a strategy for making personalized recommendations that combines UserRank with collaborative filtering. We evaluate our method with an informal user study and show that the results obtained are promising.
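    UserRank itself is novel to the thesis and not reproduced here; the sketch below shows the generic PageRank-style power iteration that such link-analysis rankings build on (hypothetical link graph and damping factor):

    ```python
    import numpy as np

    # Hypothetical graph: node i "endorses" node j (e.g. a user favourites an item).
    links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
    n, d = 4, 0.85  # number of nodes, damping factor

    # Column-stochastic transition matrix of the link graph.
    M = np.zeros((n, n))
    for src, dsts in links.items():
        for dst in dsts:
            M[dst, src] = 1 / len(dsts)

    rank = np.full(n, 1 / n)
    for _ in range(100):  # power iteration until (approximate) convergence
        rank = (1 - d) / n + d * M @ rank

    print("ranking, best first:", np.argsort(-rank))
    ```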
  • Andrews, Eric (2015)
    Users of large online dating sites are confronted with vast numbers of candidates to browse through and communicate with. To help them in their endeavor and to cope with information overload, recommender systems can be utilized. This thesis introduces reciprocal recommender systems aimed at the domain of online dating. An overview of previously developed methods is presented, and five methods are described in detail, one of which is a novel method developed in this thesis. The five methods are evaluated and compared on a historical data set collected from an online dating website operating in Finland. Additionally, factors influencing the design of online dating recommenders are described, and support for these characteristics is derived from our historical data set and previous research on other data sets. The empirical comparison of the five methods on different recommendation quality criteria shows that no method is overwhelmingly better than the others and that a trade-off must be made when choosing one for a live system. Making that trade-off decision warrants future research, however, as it is not clear how different criteria affect user experience and the likelihood of finding a partner in a live online dating context.
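    A common way to make a recommender reciprocal is to aggregate the two directions of predicted interest with a harmonic mean, which heavily penalizes one-sided matches. A toy sketch with hypothetical scores (not the thesis's novel method):

    ```python
    def harmonic_mean(a, b):
        """Zero if either side has no interest: one-sided matches score poorly."""
        return 2 * a * b / (a + b) if a + b > 0 else 0.0

    # Hypothetical predicted preference scores in [0, 1], one per direction.
    p_x_likes_y = {"anna": 0.9, "bea": 0.6, "cara": 0.8}  # how much user X likes each candidate
    p_y_likes_x = {"anna": 0.1, "bea": 0.7, "cara": 0.6}  # how much each candidate likes X back

    scores = {c: harmonic_mean(p_x_likes_y[c], p_y_likes_x[c]) for c in p_x_likes_y}
    for cand, s in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{cand}: reciprocal score {s:.2f}")
    ```

    Note how "anna", the strongest one-directional candidate, drops to last place once the other direction is taken into account; this is exactly the reciprocity that distinguishes these systems from ordinary recommenders.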
  • Hällsten, Susanna (2021)
    Chiral assemblies of metal nanoparticles absorb and/or scatter left- and right-handed circularly polarized light with different intensities, usually in the visible spectral region. This difference in absorption, called circular dichroism (CD), and the closely related anisotropy factor (g-factor), the CD spectrum normalized by the overall absorption, describe the optical activity of chiral assemblies. The aim of this thesis was to study and optimize the structural parameters affecting the g-factor of a chiral gold nanorod (AuNR) dimer to reach the highest possible value. The structure consisted of two AuNRs bound together with a DNA origami in a crossed-fingers conformation. The properties studied were silver as a coating material for the AuNRs, the dimensions of the AuNRs, the angle between the long axes of the AuNRs, and the interparticle distance. The dimensions were compared using different-sized AuNRs, the angle was controlled by changing the DNA strands acting as a bridge between the two bundles in the DNA origami, and the distance between the AuNRs was controlled by the length of the thiol-treated DNA strands used to bind the AuNRs to the origami. The experiments showed that the best g-factor was achieved with 33×74 nm AuNRs at an angle of approximately 55° and an interparticle distance of 24 nm. The optimized assembly increased the g-factor notably, from 0.05 to 0.12. This is the highest g-factor reported for an AuNR dimer structure to date, so the assembly could be of great use in chiral sensing in the future.
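    In absorbance units the anisotropy factor is simply the CD spectrum divided by the overall absorbance, g = ΔA/A. A minimal sketch on hypothetical spectra:

    ```python
    import numpy as np

    # Hypothetical measured spectra over visible/near-IR wavelengths (nm).
    wl = np.linspace(500, 900, 401)
    cd = 0.004 * np.exp(-((wl - 700) / 40) ** 2)            # delta-A = A_L - A_R, toy CD band
    absorb = 0.05 + 0.03 * np.exp(-((wl - 700) / 60) ** 2)  # overall absorbance A

    g = cd / absorb  # anisotropy factor: CD normalized by absorption
    peak = wl[np.argmax(np.abs(g))]
    print(f"max |g| = {np.abs(g).max():.3f} at {peak:.0f} nm")
    ```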
  • Enwald, Joel (2020)
    Mammography is used as an early detection system for breast cancer, one of the most common types of cancer regardless of sex. Mammography uses specialised X-ray machines to look into the breast tissue for possible tumours. Due to the machine's set-up, and to reduce the radiation dose patients are exposed to, the number of X-ray measurements collected is very restricted. Reconstructing the tissue from this limited information is referred to as limited-angle tomography. This is a complex mathematical problem that ordinarily leads to poor reconstruction results. The aim of this work is to investigate how well a neural network whose structure utilizes pre-existing models and the known geometry of the problem performs at this task. In this preliminary work, we demonstrate the results on simulated two-dimensional phantoms and discuss the extension of the results to three-dimensional patient data.
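    Why limited-angle data is hard can be demonstrated without any neural network: plain filtered back-projection already degrades sharply when the angular arc narrows. A sketch using scikit-image (toy phantom and angles; not the thesis's method or geometry):

    ```python
    import numpy as np
    from skimage.transform import radon, iradon

    # Toy 2-D phantom: a disc with a small bright "tumour".
    img = np.zeros((128, 128))
    yy, xx = np.mgrid[:128, :128]
    img[(yy - 64) ** 2 + (xx - 64) ** 2 < 50 ** 2] = 1.0
    img[(yy - 50) ** 2 + (xx - 80) ** 2 < 6 ** 2] = 2.0

    full_angles = np.linspace(0.0, 180.0, 180, endpoint=False)
    limited = np.linspace(-25.0, 25.0, 50, endpoint=False)  # narrow, mammography-like arc

    rec_full = iradon(radon(img, theta=full_angles), theta=full_angles)
    rec_lim = iradon(radon(img, theta=limited), theta=limited)

    def rmse(rec):
        return np.sqrt(np.mean((rec - img) ** 2))

    print(f"RMSE, full angles: {rmse(rec_full):.3f}; limited angles: {rmse(rec_lim):.3f}")
    ```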
  • Ajallooeian, Fatemeh (2018)
    Pollen samples from Lake Lavijärvi (sediment core LAV16-05), located in western Karelian Russia, were examined. 21 pollen and spore types were identified in order to reconstruct the vegetation cover of the past ~3000 years and consequently understand the major climate patterns of the area. The pollen diagram was divided into four zones determined by the main vegetation changes: Zone A (2700 to 1400 cal BP, or 750 BC to 550 AD), representing a consistent arboreal forest; Zone B (1400 to 650 cal BP, or 550 to 1300 AD), demonstrating a transition from forest to forest-steppe vegetation; Zone C (650 to 10 cal BP, or 1300 to 1940 AD), illustrating fluctuations in vegetation patterns; and finally Zone D (10 to -66 BP, or 1940 to 2016 AD), showing the recent post-war relaxation of land use. Pinus, Picea, Betula, Alnus, Chenopodiaceae and Poaceae are among the major pollen types. Throughout the core, changes in vegetation patterns and slash-and-burn cultivation are well represented. The Medieval Warm Period and the Little Ice Age are also moderately visible in the pollen frequency and variety. The anthropogenic effects of farming are displayed by large abundances of Poaceae and Cerealia pollen, especially in Zone C, by eutrophication of the lake, and by the absence of Picea pollen due to fires. Today the lake's surroundings are mainly pasture, with arable farming taking place moderately. The climate of Lavijärvi appears to have featured long winters with extensive snow cover, especially in the early stages (2600 to 1000 cal BP, or 650 BC to 950 AD), and moderately dry conditions, as suggested by Chenopodiaceae growth, though with enough soil moisture for cultivated plants. Other geochemical indicators of core LAV16-05, such as TIC, TN and C/N, were also measured. The geochemical findings indicate a silt loam sediment profile for the core, with carbon that is organic rather than inorganic, together with steady yet low levels of TN and TS. Lake Lavijärvi is a good example of a shift from dense arboreal forest to steppe-like vegetation and finally pasture over a window of 3000 years, and it can reveal useful information on the land-use history of the area.
  • Pudas, Topi (2024)
    This thesis contributes to the ongoing development of a novel, environmentally friendly e-waste recycling technology. We utilize high-intensity focused ultrasound to locally extract gold from the surface of printed circuit boards via cavitation erosion. Acoustic cavitation erosion is the phenomenon in which the acoustically driven violent collapse of gas bubbles in a liquid causes damage to nearby solids. Bubble collapse is preceded by dramatic growth, driven by the rarefactive phase of the acoustic wave. In this work, I investigate the effect of ultrasound frequency on the efficiency of gold extraction. Gold extraction experiments were conducted with three custom-built transducers with different resonant frequencies: 4.2, 7.3 and 11.8 MHz. The geometries of the transducers were identical, as were the electrical driving parameters. With each transducer, a sequence of gold extraction experiments was conducted with an increasing number of acoustic bursts (ranging from 100k to 1.9M). The results demonstrate that the lowest frequency (4.2 MHz) is 3.8 and 4.5 times more efficient at extracting gold than 7.3 and 11.8 MHz, respectively. This dramatic improvement is likely due to the larger cavitation bubbles associated with lower frequencies. Larger bubbles in the cavitating zone would be expected to undergo more bubble coalescence due to a higher gas volume ratio. Since the energy of bubble collapse increases with bubble size, increased bubble coalescence should augment the energy of bubble collapse. These results provide valuable insights for cavitation research and will guide the ongoing development of our novel e-waste recycling technology.
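    The link between drive frequency and bubble size can be illustrated with the textbook Minnaert resonance formula for a gas bubble in water, f0 = sqrt(3*gamma*p0/rho) / (2*pi*R0). The sketch below evaluates the resonant radius at the three drive frequencies (an illustration of the scaling, not the thesis's analysis):

    ```python
    import numpy as np

    gamma, p0, rho = 1.4, 101_325.0, 998.0  # air polytropic index, ambient pressure (Pa), water density (kg/m^3)

    def minnaert_radius(f_hz):
        """Resonant bubble radius R0 for drive frequency f0, from the Minnaert formula."""
        return np.sqrt(3 * gamma * p0 / rho) / (2 * np.pi * f_hz)

    for f_mhz in (4.2, 7.3, 11.8):
        r = minnaert_radius(f_mhz * 1e6)
        print(f"{f_mhz:5.1f} MHz -> resonant radius ~ {r * 1e6:.2f} um")
    ```

    The resonant radius scales as 1/f0, so the 4.2 MHz transducer drives bubbles roughly three times larger than the 11.8 MHz one, consistent with the abstract's explanation of the efficiency gap.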
  • Ulmala, Minna (University of Helsinki, 2012)
    eBusiness collaboration and an eBusiness process are introduced as the context of a long-running eBusiness transaction. The nature of eBusiness collaboration sets requirements for long-running transactions: the ACID properties of the classical database transaction must be relaxed for the eBusiness transaction. Many techniques have been developed to manage the execution of long-running business transactions, such as the classical Saga and the business transaction model (BTM) of the business transaction framework. These classic techniques cannot adequately take into account the recovery needs of long-running eBusiness transactions, and they need to be further improved and developed. The expectations for a new service composition and recovery model are defined and described. The DeltaGrid service composition and recovery model (DGM) and the constraint rules-based recovery mechanism (CM) are introduced as examples of the new model. The classic models and the new models are compared, and it is analysed how the models meet the expectations. Neither new model adopts the unconventional classification of atomicity that the BTM includes. The recovery models of the new approaches improve the ability to take data and control dependencies into account in backward recovery. The new models present two different strategies for recovering a failed service. The strategy of the CM increases flexibility and efficiency compared to the Saga or the BTF. The DGM defines characteristics that the CM does not have: a Delta-Enabled rollback, mechanisms for pre-commit and post-commit recoverability, and extensions of the concepts of shallow and deep compensation. Their use guarantees that an eBusiness process always recovers to a consistent state, which is something the Saga, the BTM and the CM could not prove. The DGM also provides the algorithms of the important mechanisms. ACM Computing Classification System (CCS): C.2.4 [Distributed Systems]: Distributed applications
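    The classical Saga idea referred to above, backward recovery by running compensations in reverse order, can be sketched in a few lines (hypothetical steps; the DGM and CM add dependency tracking and recoverability guarantees on top of this basic pattern):

    ```python
    def run_saga(steps):
        """Execute steps in order; on failure, compensate completed steps in reverse."""
        done = []
        try:
            for name, action, compensate in steps:
                action()
                done.append((name, compensate))
        except Exception as exc:
            print(f"step failed ({exc}); compensating backwards")
            for name, compensate in reversed(done):
                compensate()
                print(f"compensated: {name}")

    def fail():
        raise RuntimeError("payment rejected")

    run_saga([
        ("reserve stock", lambda: print("stock reserved"), lambda: print("stock released")),
        ("charge card",   fail,                            lambda: None),
        ("ship order",    lambda: print("shipped"),        lambda: None),
    ])
    ```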
  • Sysikaski, Mikko (2019)
    The thesis discusses algorithms for the minimum link path problem, a well-known geometric pathfinding problem. The goal is to find a path that makes the minimum number of turns amidst obstacles in a continuous space. We focus on the most classical variant, the rectilinear minimum link path problem, where the path and the obstacles are restricted to the directions of the coordinate axes. We study the rectilinear minimum link path problem in the plane and in three-dimensional space, as well as in higher-dimensional domains. We present several new algorithms for solving the problem in domains of varying dimension. For the planar case we develop a simple method that has the optimal O(n log n) time complexity. For three-dimensional domains we present a new algorithm with running time O(n^2 log^2 n), an improvement over the best previously known result of O(n^2.5 log n). The algorithm can also be generalized to higher dimensions, leading to an O(n^(D-1) log^(D-1) n) time algorithm in D-dimensional domains. We describe the new algorithms as well as the data structures used. The algorithms work by maintaining a reachable region that is gradually expanded to form a shortest path map from the starting point. The algorithms rely on several efficient data structures: the reachable region is tracked using a simple recursive space decomposition, and the region is expanded by a sweep plane method that uses a multidimensional segment tree.
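    The continuous-domain algorithms are the thesis's contribution and are not reproduced here; on a discrete grid, though, the rectilinear minimum link path reduces to a 0-1 BFS over (cell, direction) states where moving straight is free and turning costs one link. A toy sketch of that simplification:

    ```python
    from collections import deque

    # 0 = free cell, 1 = obstacle. Find a rectilinear path with the fewest segments.
    grid = [
        [0, 0, 0, 1, 0],
        [1, 1, 0, 1, 0],
        [0, 0, 0, 0, 0],
    ]
    DIRS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

    def min_links(start, goal):
        """0-1 BFS over (cell, direction): straight moves cost 0 turns, turning costs 1."""
        best = {}
        dq = deque((start, d, 0) for d in range(4))  # first direction is free
        while dq:
            (r, c), d, turns = dq.popleft()
            if best.get(((r, c), d), float("inf")) <= turns:
                continue
            best[((r, c), d)] = turns
            if (r, c) == goal:
                continue
            for nd, (dr, dc) in enumerate(DIRS):
                nr, nc = r + dr, c + dc
                if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                    if nd == d:
                        dq.appendleft(((nr, nc), nd, turns))      # straight: 0-cost edge
                    else:
                        dq.append(((nr, nc), nd, turns + 1))      # turn: 1-cost edge
        return min(best.get((goal, d), float("inf")) for d in range(4)) + 1  # links = turns + 1

    print("minimum number of segments:", min_links((0, 0), (2, 4)))
    ```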
  • Mahó, Sándor István (2021)
    This thesis analyses the changes in vertically integrated atmospheric meridional energy transport due to polar amplification on an aqua planet. We analyse the transport of sensible heat, latent energy, potential energy and kinetic energy, and cover the energy fluxes of the mean meridional circulation, transient eddies and stationary eddies. In addition, we address the response of the zonal mean air temperature, zonal mean zonal wind, zonal mean meridional wind, zonal mean stream function and zonal mean specific humidity. Numerical model experiments were carried out with OpenIFS in its aqua planet configuration. A control (CTRL) and a polar amplification (PA) simulation were set up, forced by different SST (sea surface temperature) patterns. We detected tropospheric warming and an increase in atmospheric specific humidity at 15-90° N/S, and a reduction of the meridional temperature gradient throughout the troposphere. We also found a reduced strength of the subtropical jet stream and a slowdown of the mean meridional circulation. Important changes were identified in the Hadley cell: the rising branch shifted poleward, causing reduced lifting in equatorial areas. Regarding the total vertically integrated meridional energy transport, we found reductions in the transport of the mean meridional circulation and of transient eddies at all latitudes. The largest reductions were in the Hadley cell transport (-15%) and in the midlatitude transient eddy flux (-23%). Unlike most studies, we did not observe that meridional latent energy transport increases under polar amplification. Therefore, we conclude that the increased moisture content of the atmosphere does not imply increased meridional latent energy transport, and hence there is no compensation for the decrease in meridional dry static energy transport. Lastly, we did not detect stationary eddies in our simulations, which is explained by the simplified surface boundary (i.e. the water-covered Earth surface). The main finding of this thesis is that polar amplification decreases poleward energy transport on an aqua planet.
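    This kind of transport diagnostic follows the standard mass-weighted vertical integral F(phi) = (2*pi*a*cos(phi)/g) * integral of [v E] dp, where E collects sensible heat c_p*T, latent energy L*q, and the potential and kinetic terms. A minimal sketch on hypothetical zonal-mean fields (toy wind profile, potential and kinetic terms omitted, and no mass-balance correction; not the thesis's OpenIFS diagnostics):

    ```python
    import numpy as np

    cp, Lv, g, a = 1004.0, 2.5e6, 9.81, 6.371e6  # J/kg/K, J/kg, m/s^2, m

    # Hypothetical zonal-mean fields on a (pressure, latitude) grid.
    lats = np.deg2rad(np.linspace(-90, 90, 73))
    p = np.linspace(100e2, 1000e2, 19)                               # Pa
    T = 250.0 + 40.0 * np.cos(lats)[None, :] * np.ones((p.size, 1))  # K
    q = 1e-3 * np.exp(-(np.rad2deg(lats) / 30) ** 2)[None, :] * (p / p[-1])[:, None]
    v = 0.05 * np.sin(2 * lats)[None, :] * np.ones((p.size, 1))      # toy meridional wind (m/s)

    E = cp * T + Lv * q  # energy per unit mass: sensible + latent terms only

    # Vertically and zonally integrated northward energy transport (W).
    dp = p[1] - p[0]  # uniform pressure spacing
    transport = 2 * np.pi * a * np.cos(lats) / g * (v * E).sum(axis=0) * dp
    imax = np.argmax(np.abs(transport))
    print(f"peak |transport| = {abs(transport[imax]) / 1e15:.2f} PW "
          f"at {np.rad2deg(lats[imax]):.0f} deg")
    ```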
  • Valto, Kristian (2023)
    Microservices have been a popular architectural style for building server-side applications for quite a while. The style has gained popularity for its inherent properties that counter the downsides of mature monoliths, which become harder to maintain and develop further the larger they grow. A monolithic application consists of a single unit, usually split into application tiers such as the client, the database, and the server-side application. The properties countering monoliths come from splitting a service into smaller services, which then form the server-side application by communicating with each other. The goal of a single microservice is to focus on "doing one thing well" and only that. Together the services form a loosely coupled group that achieves larger business goals. The fact remains, however, that distributed systems are complex. With software architecture we can separate the complexity of the distributed system from the business functions.