
Browsing by Title


  • Oikarinen, Joona (2017)
    In this thesis we construct probabilistic Liouville field theory on the two-dimensional sphere. We prove some of the symmetry properties of the theory and define the correlation functions of the vertex operators. Finally, we define the Liouville quantum gravity measure. The thesis also discusses how the theory is related to quantum field theory and to scaling limits of random planar maps. An essential building block of the theory is the Gaussian free field, which can be thought of as a random Gaussian field whose covariance operator is the inverse of the Laplacian. Another important ingredient of Liouville field theory is the exponential of the Gaussian free field. Defining this requires some work, since the Gaussian free field turns out to be a random generalized function, and the exponential of such an object is not defined in general. We define the exponential using the theory of Gaussian multiplicative chaos. The thesis contains a self-contained exposition of the definitions and basic properties of the Gaussian free field and its exponential. Some basic background in analysis, probability and geometry is assumed.
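    As a brief illustration of the two objects named above (a standard textbook sketch; the exact sphere conventions in the thesis may differ): the covariance of the Gaussian free field X is the Green's function of the Laplacian, and the exponential of X is defined as a Gaussian multiplicative chaos limit over mollified fields X_ε:

```latex
% Covariance of the Gaussian free field X: the Green's function of the Laplacian
\mathbb{E}[X(x)X(y)] = G(x,y), \qquad -\Delta_x G(x,y) = \delta_y(x).
% Gaussian multiplicative chaos: exponential of X via Wick-ordered mollified fields
M_\gamma(dx) = \lim_{\epsilon \to 0}
  \exp\!\Big(\gamma X_\epsilon(x) - \tfrac{\gamma^2}{2}\,
  \mathbb{E}\big[X_\epsilon(x)^2\big]\Big)\, dx, \qquad 0 \le \gamma < 2.
```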
  • Silva, Oscar S. (2020)
    Asymmetrical flow field-flow fractionation (AF4) is a separation and characterization technique for macromolecules and particles, which has been gaining popularity in a multitude of scientific and industrial applications. AF4 is considered a challenging experimental technique to optimize, and relatively few tools exist for this purpose. One of the main aims of the work was to provide practitioners of AF4 techniques with software tools that bridge the gap between domain knowledge and AF4 theory for a more fluid experimental design workflow. This is made possible by a feature that sets AF4 apart from related separation methods: it permits theory-driven prediction of sample behavior over the course of the experiment. In the first part of the computer experiments carried out, an algorithm based on probability theory was developed for predicting the ideal separation of samples from readily obtainable sample properties. Among the obtained results is a predicted fractogram, which is the end product of an AF4 experiment run. The ability to predict the separation of samples finds use in AF4 method development as well as in other applications relevant to experimental work. The algorithmic models were constructed to describe real-life systems for which experimental data was available and against which performance could be tested. The real-world systems modeled included two AF4 instrument channels with different geometries and both natural and synthetic polymer samples. Predictions by the algorithm were compared to previously published experimental data from other authors, after configuring the algorithm to the corresponding experimental setups. The results suggest that the algorithm can relatively closely approximate predictions made by the underlying ideal AF4 theory. For a disperse polymer sample in a separation program for which no simple theoretical result was available, the algorithm's predictions gave promising results for approximating the shape of the fractogram curve. In the second part of the computer experiments, a theory-based model was fitted to experimental data, and statistical inference was explored as a technique. Bayesian data analysis was used to complete a routine task in AF4 operation and subsequent data processing. The analysis provided an estimate congruent with theory and with external estimates given for the same data by other researchers. Looking forward, possible enhancements to the presented models, their wider applicability to AF4 work, and possible developments of computational models in the field are discussed.
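    The ideal AF4 theory behind such predictions fits in a few lines: a hydrodynamic radius gives a diffusion coefficient via the Stokes-Einstein relation, which under constant cross flow gives a retention time. The sketch below is a minimal illustration under ideal-retention assumptions; the function names and parameter values are ours, not the thesis software's:

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def stokes_einstein_D(r_h, T=298.15, eta=8.9e-4):
    """Diffusion coefficient (m^2/s) of a sphere with hydrodynamic
    radius r_h (m), from the Stokes-Einstein relation."""
    return KB * T / (6.0 * np.pi * eta * r_h)

def af4_retention_time(r_h, w=350e-6, Vc=1.0, Vout=1.0, T=298.15):
    """Ideal-theory AF4 retention time (s) for constant cross flow.

    w    : channel thickness (m)
    Vc   : cross-flow rate (mL/min)
    Vout : channel outlet (detector) flow rate (mL/min); units cancel
    """
    D = stokes_einstein_D(r_h, T)
    # Classical ideal-retention approximation:
    # t_r ~ (w^2 / 6D) * ln(1 + Vc / Vout)
    return (w**2 / (6.0 * D)) * np.log(1.0 + Vc / Vout)

# A toy "predicted fractogram": retention times for a mix of particle sizes
for r_nm in (5, 10, 20, 40):
    t = af4_retention_time(r_nm * 1e-9)
    print(f"r_h = {r_nm:3d} nm  ->  t_r = {t/60:6.1f} min")
```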
  • Agiashvili, Georgi (2021)
    Unlike traditional machine learning approaches that rely solely on data, Bayesian machine learning models can utilize prior knowledge of the data generating process, for instance in the form of information about plausible outcomes. More importantly, Bayesian machine learning models use the prior information as the base knowledge, on top of which learning from observations is built. The process of forming the prior distribution based on subjective probabilities is called prior elicitation, and that is the focus of this thesis. Although previous research has produced methods for prior elicitation, there has not been a general-purpose solution. In particular, the methods introduced previously have focused on specific models. This has limited the applicability of prior elicitation and, in some cases, required the expert to have a deep understanding of different aspects of Bayesian modelling. Additionally, the more general predictive elicitation methods in previous research have not accounted for the uncertainty in experts' judgements. This is important, since even the most accurate elicitation methods cannot remove all imprecision in expert judgements. For these reasons, prior elicitation has remained somewhat underrated and underused in the modern Bayesian workflow. This thesis provides a theoretical basis and validation for a novel prior elicitation method, first introduced by Hartmann et al. This principled statistical framework, called probabilistic predictive elicitation, 1) makes prior elicitation independent of the specific structure of the probabilistic model, 2) handles complex models with many parameters and potentially multivariate priors, 3) fully accounts for uncertainty in experts' probabilistic judgements on the data, and 4) provides a formal quality measure indicating whether the chosen predictive model is able to reproduce experts' probabilistic judgements. We extend the published work in multiple ways. First, we provide more thorough literature reviews on different prior elicitation approaches as well as on methods for expert elicitation. Second, we continue the discussion of the technicalities, implementation and applications of the proposed methodology. Third, we report two unpublished experiments using the proposed methodology. In addition, we discuss the methodology in the context of the modern Bayesian workflow.
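    As a much-simplified sketch of the predictive idea, one can choose prior hyperparameters so that the prior predictive distribution matches quantiles stated by an expert. The normal model, the numbers and all names below are our illustrative assumptions, and this toy omits the handling of judgement uncertainty that distinguishes the Hartmann et al. framework:

```python
import numpy as np
from scipy import optimize, stats

# Expert's judgements: (probability, value) pairs for a future observation y.
# These numbers are made up for illustration.
expert_quantiles = [(0.10, 4.0), (0.50, 10.0), (0.90, 16.0)]
sigma_y = 2.0  # assumed known observation noise

def loss(params):
    m, log_s = params
    s = np.exp(log_s)
    # Prior theta ~ N(m, s^2) with y | theta ~ N(theta, sigma_y^2)
    # gives the prior predictive y ~ N(m, s^2 + sigma_y^2).
    pred_sd = np.sqrt(s**2 + sigma_y**2)
    return sum((stats.norm.ppf(p, m, pred_sd) - q)**2
               for p, q in expert_quantiles)

res = optimize.minimize(loss, x0=[0.0, 0.0])
m, s = res.x[0], np.exp(res.x[1])
print(f"elicited prior: theta ~ N({m:.2f}, {s:.2f}^2)")
```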
  • Panchamukhi, Sandeep (2018)
    Time series analysis has been a popular research topic in the last few decades. In this thesis, we develop time series models to investigate short time series of count data. We begin with a Poisson autoregressive model and extend it to capture day effects explicitly. We then propose a hierarchical Poisson tensor factorization model as an alternative to traditional count time series models. Furthermore, we suggest a context-based model as an improvement over the hierarchical Poisson tensor factorization model. We implement the models in the open-source probabilistic programming framework Edward. This tool enables us to express the models as executable program code and allows us to rapidly prototype models without having to derive model-specific update rules. We also explore strategies for selecting the best model among the alternatives. We study the proposed models on a dataset of media consumption data. Our experimental findings demonstrate that the hierarchical Poisson tensor factorization model significantly outperforms the Poisson autoregressive models in predicting event counts. We also visualize the key results of our exploratory data analysis.
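    One common parametrization of a Poisson autoregression with a multiplicative day effect is the INGARCH(1,1)-style form sketched below (our illustration; the thesis's exact specification and its Edward implementation may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_poisson_ar(n, omega=0.5, alpha=0.4, beta=0.3, day_effect=None):
    """Simulate an INGARCH(1,1)-type Poisson autoregression:
    y_t ~ Poisson(lam_t), lam_t = omega + alpha*y_{t-1} + beta*lam_{t-1},
    optionally scaled by a length-7 multiplicative day-of-week effect."""
    y = np.zeros(n, dtype=int)
    lam = omega / (1.0 - alpha - beta)  # start near the stationary mean
    for t in range(n):
        rate = lam * (day_effect[t % 7] if day_effect is not None else 1.0)
        y[t] = rng.poisson(rate)
        lam = omega + alpha * y[t] + beta * lam
    return y

# Four weeks of simulated counts with a weekend bump
counts = simulate_poisson_ar(28, day_effect=[1.0, 1.0, 1.0, 1.0, 1.2, 1.8, 1.6])
print(counts)
```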
  • Mäklin, Tommi (2017)
    DNA sequencing has seen a rapid decrease in price during the last decade. As a result, routine sequencing of bacterial colonies from both clinical and environmental sources is becoming increasingly available. However, accurate identification of the bacterial strains colonizing a sample remains difficult, especially in the presence of multiple organisms. Traditional methods based on culturing the bacteria are laborious and ineffective, while methods based on sequencing data have trouble differentiating between closely related variants of a species. Accurate identification of the species or strains contained in a sample would be desirable both in metagenomic studies and in improving the quality of hospital care. The aim of this thesis was to develop a computational method for accurate bacterial strain identification. Based on recent advancements in sequencing read alignment and the application of Bayesian inference to bacterial strain identification, the thesis introduces a pipeline capable of rapid and accurate strain identification from high-throughput sequencing data. By representing the within-species variation with multiple reference genomes that have been clustered, the pipeline is able to accurately determine the cluster proportions in a sample from pseudoalignments of the reads to the reference genomes. The proportions are estimated using a variational Bayesian method. The accuracy of the method is evaluated on both real and synthetic data containing reads originating from Staphylococcus aureus, Staphylococcus epidermidis, Klebsiella pneumoniae, Campylobacter jejuni and Campylobacter coli. In all cases the cluster proportions are accurately identified and performance is significantly better than that of existing methods.
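    The thesis estimates the cluster proportions with a variational Bayesian method. As a minimal sketch of the same mixture-proportion problem, plain maximum-likelihood EM over a read-to-cluster compatibility matrix looks as follows (toy data, not the thesis pipeline):

```python
import numpy as np

def em_mixture_proportions(L, n_iter=200):
    """Estimate mixture proportions theta for reads over reference clusters.

    L : (n_reads, n_clusters) matrix of read-given-cluster likelihoods,
        e.g. derived from pseudoalignment compatibility.
    Returns maximum-likelihood cluster proportions via EM.
    """
    n, k = L.shape
    theta = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each cluster for each read
        R = L * theta
        R /= R.sum(axis=1, keepdims=True)
        # M-step: proportions are average responsibilities
        theta = R.mean(axis=0)
    return theta

# Toy example: 3 clusters, reads mostly compatible with clusters 0 and 2
rng = np.random.default_rng(1)
true = np.array([0.7, 0.0, 0.3])
z = rng.choice(3, size=2000, p=true)
L = np.where(np.arange(3) == z[:, None], 1.0, 0.01)  # noisy compatibility
print(em_mixture_proportions(L).round(3))
```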
  • Takko, Heli (2021)
    Quantum entanglement is one of the biggest mysteries in physics. In gauge field theories, the amount of entanglement can be measured with certain quantities, but for an entangled system these quantities show correlations in both time and space that do not fit our current understanding of the locality of measures and correlations. Difficulties in obtaining probes for entanglement in gauge theories arise from the problem of nonlocality, which can be stated as the problem of decomposing the space of physical states into different regions. In this thesis, we focus on a particular supersymmetric Yang-Mills theory that is holographically dual to a classical gravity theory in an asymptotically anti-de Sitter spacetime. We introduce the most important holographic probes of entanglement and discuss the inequalities obtained from the dual formulation of the entanglement entropy. We introduce subregion duality as an interesting conjecture of holography that remains under research. The understanding of subregion duality is not necessarily solid in arbitrary geometries, as new results have appeared that either suggest a violation of subregion duality or go against our common understanding of holography by reconstructing the bulk metric beyond the entanglement wedge. This thesis investigates this aspect of subregion duality by evaluating bulk probes, such as the Wilson loop, for two different geometries (deconfining and confining). We aim to find out whether these probes remain inside the entanglement wedge. We find that, for both geometries in four dimensions, subregion duality is not violated. In other words, the reduced CFT state does not encode information about the bulk beyond the entanglement wedge. However, we cannot assume this is the case for arbitrary geometries, and therefore this topic will remain of interest for future research.
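    The best-known of these holographic probes is the Ryu-Takayanagi prescription, quoted here in its static form for orientation (conventions may differ from the thesis): the entanglement entropy of a boundary region A is given by the area of the minimal bulk surface anchored on the boundary of A,

```latex
S(A) = \min_{\gamma_A} \frac{\mathrm{Area}(\gamma_A)}{4 G_N},
```

    and the entanglement wedge of A is the bulk domain of dependence of the region bounded by A and the minimizing surface.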
  • Kivelä, Feliks (2022)
    The crystal structure of magnetite (Fe3O4) involves Fe3+ ions in sites with tetrahedral (Td) symmetry and Fe2+ and Fe3+ ions in sites with octahedral (Oh) symmetry. Magnetite exhibits several interesting physical phenomena, such as the Verwey transition, in which the roles of the different Fe sites are an active subject of research. In the X-ray standing wave (XSW) technique, incoming and diffracted X-ray beams interfere inside a crystal, creating a standing wave with the periodicity of the diffracting atomic lattice. The phase of the wave, i.e. whether the nodes are located on the lattice planes or between them, can be adjusted by finely tuning the diffraction angle. Changing the phase in this way makes it possible to selectively vary the contributions of different atoms and absorption types (dipole versus quadrupole) to the measured total absorption spectrum. Iron K-edge absorption spectra of magnetite were studied in the presence of an XSW in an experiment conducted at the European Synchrotron Radiation Facility (ESRF) in Grenoble, France. This thesis presents an analysis of the data gathered during the experiment, with the goal of decomposing the experimentally measured pre-edge peak into its constituent components. The methods used in the analysis include principal component analysis and fitting predicted absorption peaks calculated with the Quanty software to the experimental data. The results show the dipole and quadrupole contributions of the tetrahedral sites responding to changes in the phase of the XSW in opposite ways, in a manner consistent with theoretical predictions.
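    Principal component analysis of a set of spectra reduces, at its core, to a singular value decomposition of the mean-centred data matrix. A minimal sketch on synthetic two-peak spectra (illustrative only, not the thesis's analysis pipeline):

```python
import numpy as np

def pca_spectra(spectra, n_components=3):
    """PCA of a stack of absorption spectra via SVD.

    spectra : (n_spectra, n_energies) array, one spectrum per row
              (e.g. a pre-edge region measured at different XSW phases).
    Returns (mean, components, scores).
    """
    mean = spectra.mean(axis=0)
    X = spectra - mean
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    components = Vt[:n_components]                    # spectral basis vectors
    scores = U[:, :n_components] * S[:n_components]   # weights per spectrum
    return mean, components, scores

# Toy data: two underlying peaks mixed with phase-dependent weights
rng = np.random.default_rng(2)
E = np.linspace(0.0, 1.0, 200)
c1 = np.exp(-((E - 0.4) / 0.05)**2)   # first component peak
c2 = np.exp(-((E - 0.6) / 0.05)**2)   # second component peak
w = rng.uniform(0, 1, size=(10, 1))
data = w * c1 + (1 - w) * c2 + 0.01 * rng.standard_normal((10, 200))
_, comps, _ = pca_spectra(data, n_components=2)
print(comps.shape)  # (2, 200)
```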
  • Walia, Parampreet Singh (2013)
    We study the possible initial conditions of the universe and the possibility of isocurvature perturbations in the early universe through CMB data. We consider three isocurvature modes: the Cold Dark Matter Density Isocurvature (CDI) mode, the Neutrino Density Isocurvature (NDI) mode and the Neutrino Velocity Isocurvature (NVI) mode. We use three CMB datasets (WMAP, QUaD and ACBAR) to constrain the (possibly) correlated adiabatic and isocurvature models. For the CDI and NDI models we use both a phenomenological approach, where primordial perturbations are parametrized in terms of amplitudes at two different scales, and a slow-roll two-field inflation approach. For the NVI model we use only the phenomenological approach, since the NVI mode would occur only after neutrino decoupling, i.e., after inflation. We find that larger isocurvature fractions are allowed in the NDI and NVI models than in the corresponding CDI models. For generally correlated perturbations, we find the upper limits on the CDM density, neutrino density and neutrino velocity isocurvature fractions to be 4.5%, 9.8% and 12.4%, respectively, at k = 0.002 Mpc⁻¹. The analysis has also been done for the special cases of uncorrelated and fully (anti)correlated perturbations. We find no clear preference for a non-zero isocurvature fraction in the models considered. We find that the odds for a correlated isocurvature model compared to the standard adiabatic model are very low. We conclude that the present data supports the standard adiabatic model.
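    For orientation, a commonly used measure of the isocurvature contribution is the primordial isocurvature fraction (definitions vary between papers, so this is indicative rather than the thesis's exact parametrization):

```latex
\alpha(k) = \frac{\mathcal{P}_{\mathrm{iso}}(k)}
                 {\mathcal{P}_{\mathcal{R}}(k) + \mathcal{P}_{\mathrm{iso}}(k)},
```

    where P_R and P_iso are the curvature and isocurvature power spectra evaluated at scale k, such as the k = 0.002 Mpc⁻¹ quoted above.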
  • Adio, Luqmon (2019)
    Particle Induced X-ray Emission (PIXE) was originally introduced as an ion-beam analytical technique in Lund in the 1970s and has since become part of the available techniques in many laboratories around the world. An external-beam PIXE set-up is used here to probe annual tree rings. The goal is to see the effects of volcanic eruptions from the perspective of trees grown in Finland. The theory part describes how volcanoes are formed, gives a brief history of volcanic activity, and covers the growth metabolism of trees and characteristic X-ray production. The two tree samples used in this experiment came from two different regions of Finland: the first is a pine from Parikkala (a small place near Savonlinna) in the south-eastern part of Finland, and the second is a spruce from Pielavesi (near Kuopio) in the central part of Finland. The samples were carefully prepared for ionisation. The collected spectra were analysed in the PyMCA software, developed by the Software Group of the European Synchrotron Radiation Facility (ESRF). PyMCA is a ready-to-use and, in many aspects, state-of-the-art set of applications implementing most of the needs of X-ray fluorescence data analysis; it is used to interpret X-ray fluorescence spectra from a diverse array of samples.
  • Olander, Amanda (2022)
    According to the curricula for both comprehensive school and upper secondary school, problem solving is one of the skills to be taught (Läroplanen, 2014, 2019). The more students themselves get to try, do and understand in problem solving, the more rewarding the process becomes: motivation for mathematics increases (Lambdin, 2003) and learning becomes long-lasting (Läroplanen, 2019). This laid the foundation for this thesis. In the thesis I have used Pólya's problem-solving model from 1973 to give an insight into problem solving in practice. The model consists of four steps: understanding the problem, devising a plan, carrying out the plan, and looking back at the solution. The mathematical part of the thesis treats four subareas of probability in upper secondary school: classical probability, combinatorics, statistical probability and conditional probability are covered with examples, tables and figures. At the end of this chapter, the non-intuitive character of probability is discussed and common misconceptions in probability are brought up on the basis of earlier research and theory. Building on this character and these misconceptions, the following chapter presents, with comments, ways to counteract them and to facilitate the teaching of probability through problem solving. The final chapter of the thesis presents four problems in probability, one for each subarea. The problems are discussed using Pólya's problem-solving model, and model solutions with figures and tables are presented for each problem.
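    As a one-line illustration of two of the subareas treated, here are the standard definitions of classical probability (for a finite sample space Ω of equally likely outcomes) and conditional probability:

```latex
P(A) = \frac{|A|}{|\Omega|}, \qquad
P(A \mid B) = \frac{P(A \cap B)}{P(B)}, \quad P(B) > 0.
```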
  • Soininvaara, Katri (2017)
    In condition-based maintenance, data is collected from a machine to provide advice on the frequency and location of developing faults. Statistical inference is needed to transform the data into information on the health of the machine. The ultimate goal is to minimise machine downtime due to unexpected breakage. Predictive maintenance attempts to forecast the condition of the machine components from the observed data and to maintain the machine just before it breaks down. The research question this thesis aims to solve is how to diagnose and predict component health based on data collected from the machine. Based on the literature, the hidden Markov model is selected for further study. There is usually uncertainty about the parameters and structure of the model due to the complicated causal relationships in the modelling problem. The thesis therefore concentrates on finding a suitable inference algorithm that is able to learn the model from data. Six different frequentist and Bayesian algorithms are tested on a synthetic example. A hypothesis is put forward that a hybrid genetic variational Bayesian algorithm could be used to find the best-performing hidden Markov model of component health. As expected, the hybrid variational algorithm performs better than the other examined algorithms, especially when there is uncertainty about the model structure. However, since there is typically an imbalance between the data depicting faults and the data depicting normal behaviour, the simulated test case shows that even the best-performing variational algorithm has difficulties in identifying the correct model. This results in increased uncertainty in the health predictions. The thesis confirms that the hidden Markov model has many good qualities for modelling component health based on remote monitoring data. Due to the versatility of the model, it can be modified to account for the many details of component degradation behaviour in different machines.
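    For concreteness, the basic computation underlying all of the compared learning algorithms is the HMM likelihood, obtained with the forward algorithm. A minimal sketch with a toy component-health chain (the states and probabilities are our illustrative assumptions):

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of an observation sequence under a discrete HMM,
    computed with the scaled forward algorithm.

    obs : sequence of observation indices
    pi  : (K,) initial state distribution
    A   : (K, K) transition matrix, A[i, j] = P(state j | state i)
    B   : (K, M) emission matrix, B[k, m] = P(obs m | state k)
    """
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Toy component-health HMM: states 0=healthy, 1=degraded, 2=failed
pi = np.array([1.0, 0.0, 0.0])
A = np.array([[0.95, 0.05, 0.00],
              [0.00, 0.90, 0.10],
              [0.00, 0.00, 1.00]])
B = np.array([[0.8, 0.2],    # healthy: mostly "normal" readings
              [0.4, 0.6],    # degraded: mostly "alarm" readings
              [0.1, 0.9]])   # failed
print(forward_loglik([0, 0, 1, 1, 1], pi, A, B))
```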
  • Lampuoti, Jarkko (2021)
    Scandium-44 is a medically interesting positron- and gamma-emitting radionuclide with possible applications in molecular imaging. It is commonly produced with a cyclotron in a calcium-based or sometimes a titanium-based irradiation target. As the radiopharmaceutical use of scandium radionuclides commonly requires chelation, scandium needs to be separated from the target matrix. This is most often carried out either via extraction chromatography using a suitable solid phase or through precipitation-filtration. In this work, scandium-44, along with other scandium radionuclides, was produced by cyclotron irradiation with 10 MeV protons of a solid calcium carbonate or calcium metal target of natural isotopic abundance. Scandium was separated from the irradiated targets using four different chromatographic materials and a precipitation method. Scandium-44 was produced in kilo- and megabecquerel amounts with an average saturation yield of 47 MBq/μA. The separation yields achieved in a single elution ranged from 28 ± 11 % to 70 ± 20 %, with the best-performing extraction material being UTEVA resin.
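    For orientation, a saturation yield such as the one quoted above is conventionally obtained by normalizing the end-of-bombardment activity with the beam current and the saturation factor (a standard activation relation; the symbols are generic, not the thesis's notation):

```latex
Y_{\mathrm{sat}} = \frac{A_{\mathrm{EOB}}}{I \left( 1 - e^{-\lambda t_{\mathrm{irr}}} \right)},
```

    where A_EOB is the activity at the end of bombardment, I the beam current, λ the decay constant and t_irr the irradiation time.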
  • Hämäläinen, Jussi (2024)
    In this thesis, we aim to introduce the reader to profinite groups. Profinite groups are defined by two characteristics: firstly, they have a topology defined on them (notably, they are compact). Secondly, they are constructed from some collection of finite groups, each equipped with a discrete topology and forming what is known as an inverse system. The profinite group emerges as an inverse limit of its constituent groups. This definition is, at this point, necessarily quite abstract. Thus, before we can really understand profinite groups, we must examine two areas. First, we will study topological groups. This will give us the means to deal with groups as topological spaces. Topological groups have some characteristics that differentiate them from general topological spaces: in particular, a topological group is always a homogeneous space. Secondly, we will explore inverse systems and inverse limits, which will take us into category theory. While we could explain these concepts without categories, this thesis takes the view that category theory gives us a useful “50,000-foot view” by giving these ideas a wider mathematical context. In the second chapter, we will go through the preliminaries concerning group theory, general topology and category theory that will be needed later. We will begin with some basic concepts from group theory and point-set topology. These sections will mostly contain material familiar from introductory university courses. The chapter then continues by introducing some basic concepts of category theory, including inverse systems and inverse limits. For these, we will give an application by showing how the Cantor set is homeomorphic to an inverse limit of a collection of finite sets. In the third chapter, we will examine topological groups and prove some of their properties. In the fourth chapter, we will introduce an example of a profinite group: Zp, the additive group of p-adic integers. This will be expanded into a ring and then into the field Qp. We will discuss the uses of Zp and Qp and show how to derive them from an inverse limit of finite, compact groups.
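    Concretely, the construction referred to here realizes the p-adic integers as the inverse limit of the finite rings Z/p^nZ under the canonical reduction maps:

```latex
\mathbb{Z}_p = \varprojlim_{n} \mathbb{Z}/p^n\mathbb{Z}
= \Big\{ (x_n)_{n \ge 1} \in \prod_{n \ge 1} \mathbb{Z}/p^n\mathbb{Z}
\;:\; x_{n+1} \equiv x_n \ (\mathrm{mod}\ p^n) \Big\},
```

    each finite ring carrying the discrete topology, which makes Zp a compact topological group under addition.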
  • Speer, Jon (2020)
    The techniques used to program quantum computers are somewhat crude. As quantum computing progresses and becomes mainstream, a more efficient method of programming these devices would be beneficial. We propose a method that applies today's programming techniques to quantum computing, with program equivalence checking used to discern between code suited for execution on a conventional computer and on a quantum computer. This process involves implementing a quantum algorithm in a programming language. This so-called benchmark implementation can be checked against code written by a programmer, with semantic equivalence between the two implying that the programmer's code should be executed on a quantum computer instead of a conventional computer. Using a novel compiler optimization verification tool named CORK, we test for semantic equivalence between a portion of Shor's algorithm (representing the benchmark implementation) and various modified versions of this code (representing arbitrary code written by a programmer). Some of the modified versions are intended to be semantically equivalent to the benchmark, while others are semantically inequivalent. Our testing shows that CORK is able to correctly determine semantic equivalence or inequivalence in a majority of cases.
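    To make "semantic equivalence" concrete, here is a toy pair of implementations of modular exponentiation (the classical workhorse inside Shor's algorithm): they differ syntactically but compute the same function, which is exactly the property a checker must decide. This example is ours for illustration and is unrelated to CORK's actual input format:

```python
def modexp_loop(base, exp, mod):
    """Naive repeated multiplication."""
    r = 1
    for _ in range(exp):
        r = (r * base) % mod
    return r

def modexp_square(base, exp, mod):
    """Square-and-multiply: semantically equivalent, syntactically different."""
    r, b = 1, base % mod
    while exp:
        if exp & 1:
            r = (r * b) % mod
        b = (b * b) % mod
        exp >>= 1
    return r

# Spot-check equivalence on a few inputs (testing suggests, but cannot
# prove, equivalence; deciding it in general is what formal checkers do)
assert all(modexp_loop(b, e, 21) == modexp_square(b, e, 21)
           for b in range(1, 10) for e in range(0, 12))
```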
  • Siipola, Sade-Tuuli (2023)
    Data centers provide a demanding and complex environment for networking, as there is a need to provide fairness, throughput, and responsiveness while balancing great volumes of data and different types of flows. Programmable scheduling aims to make networking more flexible by providing capabilities for testing, modifying, and running a greater number of scheduling algorithms on switches than is currently possible. This is done by having a hardware design on top of which scheduling algorithms can be run as software. Over the years, multiple different abstractions for the switch scheduler have been suggested, with the aim of being capable of running at line rate. This thesis is a literature review of different programmable scheduler designs, focusing on Push-In First-Out, Push-In Extract-Out, Strict Priority Push-In First-Out, and Admission-In First-Out designs. This work provides an overview of the designs and their hardware implementations, observing their strengths and weaknesses in the data center environment. The designs are compared to one another with a focus on trade-offs between metrics such as speed, expressiveness, and scalability, with a discussion of how these trade-offs mean that no current design is superior to the others in all aspects.
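    To make the first of these abstractions concrete: a Push-In First-Out (PIFO) queue admits each packet at a position determined by its rank but always dequeues from the head. In software this behaviour can be modelled with a priority queue, as in this minimal sketch (ours, for illustration; the hardware designs surveyed exist precisely because a heap like this does not run at line rate):

```python
import heapq
import itertools

class PIFO:
    """Push-In First-Out queue: elements are pushed in at a position
    determined by their rank and always dequeued from the head."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # FIFO tie-break for equal ranks

    def push(self, rank, pkt):
        heapq.heappush(self._heap, (rank, next(self._seq), pkt))

    def pop(self):
        return heapq.heappop(self._heap)[2]

q = PIFO()
for rank, pkt in [(5, "A"), (1, "B"), (3, "C"), (1, "D")]:
    q.push(rank, pkt)
print([q.pop() for _ in range(4)])  # ['B', 'D', 'C', 'A']
```

    Push-In Extract-Out relaxes the dequeue side by allowing extraction from positions other than the head, which is part of what makes its hardware realization harder.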
  • Puro, Touko (2023)
    GPUs have become an important part of large-scale and high-performance physics simulations due to their superior performance [11] and energy efficiency [23] over CPUs. This thesis examines how to accelerate an existing CPU stencil code, originally parallelized through message passing, with GPUs. Our first research question is how to utilize the CPU cores alongside GPUs when the bulk of the computation happens on GPUs. Secondly, we investigate how to address the performance bottleneck of data movement between CPU and GPU when there is a need to perform computational tasks originally intended to be executed on CPUs. Lastly, we investigate how the performance bottleneck of communication between processes can be alleviated to make better use of the available compute resources. We approach these problems by building a preprocessor designed to make an existing CPU codebase suitable for GPU acceleration, and we alleviate the communication bottleneck by extending an existing GPU-oriented library, Astaroth. We improve its task scheduling system and extend its domain-specific language (DSL) for stencil computations. Our solutions are demonstrated by making an existing CPU-based astrophysics simulation code, Pencil Code [4], suitable for GPU acceleration with the use of our preprocessor and the Astaroth library. Our results show that we are able to utilize CPU cores to perform useful work alongside the GPUs. We also show that we are able to circumvent the CPU-GPU data movement bottleneck by making code suitable for offloading through OpenMP offloading and translation to GPU code. Lastly, we show that in certain cases Astaroth's communication performance is increased by around 21% through smaller message sizes, with the added benefit of 14% lower memory usage, which corresponds to around an 18% improvement in overall performance. Furthermore, we show the benefits of the improved tasking and an identified memory-performance trade-off.
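    For readers unfamiliar with the workload: a stencil code updates every grid point from a fixed neighbourhood of nearby points, which is what Pencil Code and Astaroth do at high order in 3-D. A minimal 1-D NumPy sketch of one such update (illustrative only, unrelated to either codebase):

```python
import numpy as np

def heat_step(u, alpha=0.1):
    """One explicit step of the 1-D heat equation using a 3-point stencil:
    u[i] += alpha * (u[i-1] - 2*u[i] + u[i+1]), boundaries held fixed."""
    out = u.copy()
    out[1:-1] += alpha * (u[:-2] - 2.0 * u[1:-1] + u[2:])
    return out

u = np.zeros(16)
u[8] = 1.0          # initial heat spike
for _ in range(10):
    u = heat_step(u)
print(u.round(3))
```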
  • Olander, Tom (2020)
    This thesis asks what it means for the long (advanced) mathematics syllabus in upper secondary school when programming is added according to the curriculum. First, earlier research on programming and mathematics in upper secondary school is reviewed, together with the conclusions reached in it. That programming can be useful to students in working life is a given, but whether programming should belong to mathematics is one of the main questions of the thesis. Since there is not much research on programming in mathematics teaching in Finland, research from other countries has also been used, and likewise research on students of roughly the same age. No teaching material yet exists for programming in upper secondary school. Therefore, the thesis also contains suggestions for task types that could be used in teaching; these examples may be freely modified and used as an aid when planning instruction. Another use of this thesis could be as a basis for future planning of programming in upper secondary school, regarding both the subjects in which programming is handled and how this teaching is carried out.
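    As a purely hypothetical example of such a task type (our illustration, not taken from the thesis): estimating the probability of rolling at least one six in four dice throws by simulation, a classic exercise that connects probability to a few lines of code:

```python
import random

def at_least_one_six(n_trials=100_000):
    """Estimate P(at least one six in four dice rolls) by simulation.
    The exact value is 1 - (5/6)**4, approximately 0.5177."""
    hits = sum(
        any(random.randint(1, 6) == 6 for _ in range(4))
        for _ in range(n_trials)
    )
    return hits / n_trials

print(at_least_one_six())
```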
  • Ahlskog, Niki (2019)
    The purpose of a Progressive Web Application (PWA) is to blur, or even remove, the boundary between an application downloaded from an app store and a normal website. A PWA is like any normal website, but it additionally meets the following criteria: the application scales to any device; the application is served over an encrypted connection; and the application can be installed as a shortcut on a phone's home screen, in which case it opens without the navigation tools familiar from the browser and can also be opened without a network connection. This thesis reviews the techniques for building a PWA and defines when an application is a PWA. The speed of a PWA is measured with the caching features of the Service Worker enabled and disabled. The creation and deployment of a PWA is examined in an existing private client project, paying attention to the advantages and pain points that a PWA brings. To evaluate the results, measurements of the application's progressiveness and speed are taken with Google Chrome's Lighthouse tool. In addition, a load-time test is run against the application several times using the Puppeteer library, and the usefulness of the PWA's Service Worker cache is examined in terms of performance and load time. To draw conclusions about the use of the Service Worker cache, the change in speed is examined with the progressive features enabled and disabled. The effects of the Service Worker on application speed are also examined through a Google case study. The test results show that using the Service Worker cache is faster in all cases, and that the Service Worker cache is faster than the browser's own cache. The Service Worker may also be stopped and in a waiting state in the user's browser; even so, activating the Service Worker and using its cache is faster than loading from the browser cache or directly from the network.