Browsing by discipline "Soveltava matematiikka" (Applied Mathematics)
Now showing items 21-40 of 143
-
(2018)Sharing data can lead to scientific discoveries, but it can hurt the privacy of the people in the data. In this thesis we use two deep generative models, the generative adversarial network (GAN) and the variational autoencoder (VAE), to generate synthetic data that could be shared instead of the original data. These models are also modified to satisfy the definition of differential privacy (DP), a mathematically rigorous definition of privacy. First we give the essential definitions for DP together with proofs of some of them. We then discuss data sharing and the potential privacy risks related to it, as well as methods for mitigating those risks. Finally we introduce the deep generative models and their DP versions used for creating synthetic data, and measure the quality of the synthetic data using several continuous- and categorical-valued data sets.
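As a minimal illustration of the noise-addition primitive behind such DP methods (the classical Gaussian mechanism; the dataset, bounds and privacy parameters below are invented for the example, and this is not the thesis's DP-GAN/DP-VAE training procedure):

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng=None):
    """Release `value` with (epsilon, delta)-DP using the classical
    Gaussian mechanism (the noise scale below is valid for epsilon < 1)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(0.0, sigma, size=np.shape(value))

# Example: privatise the column means of data lying in [0, 1]^d;
# replacing one row changes the mean by at most sqrt(d)/n in L2 norm.
data = np.random.default_rng(0).uniform(size=(1000, 5))
n, d = data.shape
private_mean = gaussian_mechanism(data.mean(axis=0), np.sqrt(d) / n,
                                  epsilon=0.5, delta=1e-5)
```

In DP training of deep generative models the same idea is typically applied to clipped per-example gradients rather than to a summary statistic.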
-
(2017)Differential privacy is a mathematically defined concept of data privacy, based on the idea that a person should not face any additional harm by opting to give their data to a data collector. Data release mechanisms that satisfy the definition are said to be differentially private, and they guarantee the privacy of the data on a specified privacy level by utilising carefully designed randomness that sufficiently masks the participation of each individual in the data set. The introduced randomness decreases the accuracy of the data analysis, but this effect can be diminished by clever algorithmic design. The robust private linear regression algorithm is a differentially private mechanism originally introduced by A. Honkela, M. Das, O. Dikmen, and S. Kaski in 2016. The algorithm is based on projecting the studied data inside known bounds and applying the differentially private Laplace mechanism to perturb the sufficient statistics of the Bayesian linear regression model, which is then fitted to the data using the privatised statistics. In this thesis, the idea, definitions, and the most important theorems and properties of differential privacy are presented and discussed. The robust private linear regression algorithm is then presented in detail, including improvements related to determining and handling the parameters of the mechanism, developed during my work as a research assistant in the Probabilistic Inference and Computational Biology research group (Department of Computer Science at the University of Helsinki and Helsinki Institute for Information Technology) in 2016-2017. The performance of the algorithm is evaluated experimentally on both synthetic and real-life data. The latter data are from the Genomics of Drug Sensitivity in Cancer (GDSC) project and consist of the gene expression data of 985 cancer cell lines and their responses to 265 different anti-cancer drugs. The studied algorithm is applied to the GDSC data with the goal of predicting which cancer cell lines are sensitive to each drug and which are not. The application of a differentially private mechanism to the gene expression data is justifiable because genomic data are identifying and carry highly sensitive information about e.g. an individual's phenotype, health, and risk of various diseases. The results presented in the thesis show that the studied algorithm works as planned and is able to benefit from having more data: in terms of prediction accuracy, it approaches the non-private version of the same algorithm as the size of the available data set increases. It also reaches considerably better accuracy than the three compared algorithms based on different differentially private mechanisms: private linear regression with no projection, output-perturbed linear regression, and functional mechanism linear regression.
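A minimal sketch of the sufficient-statistic perturbation idea (not the authors' exact algorithm; the projection bounds and the crude sensitivity constant below are assumptions made for the example):

```python
import numpy as np

def dp_linear_regression(X, y, epsilon, rng=None):
    """Sketch: project data into [-1, 1], perturb the sufficient
    statistics X^T X and X^T y with Laplace noise, then solve the
    normal equations from the noisy statistics."""
    rng = np.random.default_rng() if rng is None else rng
    X, y = np.clip(X, -1.0, 1.0), np.clip(y, -1.0, 1.0)
    d = X.shape[1]
    # Crude L1 sensitivity: one person's row moves each of the
    # d(d+1)/2 unique entries of X^T X and the d entries of X^T y
    # by at most 2, since all products lie in [-1, 1].
    sensitivity = 2.0 * (d * (d + 1) / 2 + d)
    scale = sensitivity / epsilon
    noise = np.triu(rng.laplace(0.0, scale, (d, d)))
    xtx = X.T @ X + noise + np.triu(noise, 1).T   # symmetric noisy X^T X
    xty = X.T @ y + rng.laplace(0.0, scale, d)
    return np.linalg.solve(xtx + 1e-3 * np.eye(d), xty)  # small ridge for stability
```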
-
(2015)Speech is the most common form of human communication. An understanding of the speech production mechanism and of the perception of speech is therefore an important topic when studying human communication. This understanding is also of great importance both in the medical treatment of a patient's voice and in human-computer interaction via speech. In this thesis we present a model for digital speech called the source-filter model. In this model speech is represented with two independent components, the glottal excitation signal and the vocal tract filter. The glottal excitation signal models the airflow created at the vocal folds, which works as the source for the created speech sound. The vocal tract filter describes how the airflow is filtered as it travels through the vocal tract, creating the sound radiated to the surrounding space from the lips, which we recognize as speech. We also present two different parametrized models for the glottal excitation signal, the Rosenberg-Klatt model (RK-model) and the Liljencrants-Fant model (LF-model). The RK-model is quite simple, being parametrized with only one parameter in addition to the fundamental frequency of the signal, while the LF-model is more complex, taking four parameters to define the shape of the signal. A transfer function for the vocal tract filter is also derived from a simplified model of the vocal tract. Additionally, the relevant parts of the theory of signal processing are presented before the presentation of the source-filter model. A relatively new model for glottal inverse filtering (GIF), called the Markov chain Monte Carlo method for glottal inverse filtering (MCMC-GIF), is also presented in this thesis. Glottal inverse filtering is a technique for estimating the glottal excitation signal from a recorded speech sample. It is a widely used technique, for example in phoniatrics when inspecting the condition of a patient's vocal folds. In practice the aim is to separate the measured signal into the glottal excitation signal and the vocal tract filter. The first method for solving glottal inverse filtering was proposed in the 1950s and since then many different methods have been proposed, but so far none of the methods has been able to yield robust estimates for the glottal excitation signal from recordings with a high fundamental frequency, such as women's and children's voices. Recently, using synthetic vowels, MCMC-GIF has been shown to produce better estimates for these kinds of signals than other state-of-the-art methods. The MCMC-GIF method requires an initial estimate for the vocal tract filter. This is obtained from the measurements with the iterative adaptive inverse filtering (IAIF) method. A synthetic vowel is then created with the RK-model and the vocal tract filter, and compared to the measurements. The MCMC method is then used to adjust the RK excitation parameter and the parameters of the vocal tract filter to minimize the error between the synthetic vowel and the measurements, and ultimately obtain a new estimate for the vocal tract filter. The filter can then be used to calculate the glottal excitation signal from the measurements. We explain this process in detail, and give numerical examples of the results of the MCMC-GIF method compared against the IAIF method.
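A rough sketch of source-filter synthesis under the assumptions above (a simplified Rosenberg-style pulse standing in for the RK/LF pulses, and invented formant values; not the thesis's implementation):

```python
import numpy as np
from scipy.signal import lfilter

fs = 16000                      # sampling rate (Hz)
f0 = 120.0                      # fundamental frequency (Hz)
T = int(fs / f0)                # samples per glottal period

# Rosenberg-style glottal pulse: raised-cosine opening phase and a
# crude mirrored closing phase; open_q is the single shape parameter.
open_q = 0.6
n_open = int(open_q * T)
t = np.arange(n_open)
opening = 0.5 * (1 - np.cos(np.pi * t / n_open))
pulse = np.concatenate([opening, opening[::-1][:T - n_open]])

excitation = np.tile(pulse, 50)   # a train of glottal pulses (the "source")

# All-pole vocal tract "filter": resonances near typical /a/ formants.
formants, bandwidths = [700, 1220, 2600], [60, 70, 160]
a = np.array([1.0])
for f, b in zip(formants, bandwidths):
    r = np.exp(-np.pi * b / fs)   # pole radius from the bandwidth
    a = np.convolve(a, [1.0, -2 * r * np.cos(2 * np.pi * f / fs), r * r])

speech = lfilter([1.0], a, excitation)   # filtered airflow ~ vowel sound
```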
-
(2020)This thesis considers a certain mathematical formulation of scattering phenomena. Scattering is a common physical process where some initial wave is disturbed, producing a scattered wave. If the direct problem is to determine the scattered wave from knowledge of the object that causes the scattering as well as the initial wave, then the inverse problem is to determine the object from knowledge of how different waves scatter from it. In this thesis we consider direct and inverse scattering problems governed by the Helmholtz equation $\Delta u + k^2 \eta u = 0$ in $\mathbb{R}^d$ with $d = 3$. The positive function $\eta \in L^\infty(\mathbb{R}^d)$ is assumed to be such that $\eta(x) = 1$ outside of some ball. In particular, the function $\eta$ models the physical properties of the scattering object, and in a certain physical setting the function $n = +\sqrt{\eta}$ is the index of refraction. The initial motivation for this thesis was the inverse scattering problem and its uniqueness. However, for any inverse problem, one first has to understand the corresponding direct problem. In the end, the balance between treating the direct and the inverse problem is left fairly even. This thesis closely follows books by Colton and Kress, and Kirsch. The first chapter is the introduction, in which the overview of the thesis is presented and the working assumptions are made. The second chapter treats the needed preliminaries, such as compact operators, Sobolev spaces, the Fredholm alternative, spherical harmonics and spherical Bessel functions. In particular these are needed in various results of chapter three, in which the direct scattering problem is considered. After motivating and defining the direct scattering problem, the main goal is to prove its well-posedness. The uniqueness of the problem is proved by two results, Rellich's lemma and the unique continuation principle. The Fredholm alternative is applied to prove the existence of the solution on the basis of uniqueness. Equipped with an understanding of the direct scattering problem, the inverse scattering problem can be considered in the fourth chapter. After defining the inverse scattering problem, the uniqueness of its solution is considered. The proof is contrasted with the historically important paper by Calderón considering another kind of inverse problem. The proof consists of three lemmas, of which the second and third are used directly in proving the uniqueness of the inverse problem. The uniqueness of the inverse problem can be considered the main result of this thesis.
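For orientation, the standard formulation that this setting leads to (notation assumed here, following the Colton-Kress framework the thesis cites) reads:

```latex
u = u^{i} + u^{s}, \qquad u^{i}(x) = e^{ik\, x \cdot \theta}, \qquad
\Delta u + k^{2}\eta u = 0 \quad \text{in } \mathbb{R}^{3},
\qquad
\lim_{r \to \infty} r\left(\frac{\partial u^{s}}{\partial r} - ik u^{s}\right) = 0,
```

where the last condition is the Sommerfeld radiation condition, holding uniformly over all directions, and the inverse problem data typically consists of the far-field pattern of $u^s$.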
-
(2013)Mathematics teaching has been an active field of research and development at the Department of Mathematics and Systems Analysis at Aalto University. This research has been motivated by a desire to increase the number of students that pass compulsory basic mathematics courses without compromising on standards. The courses aim to provide engineering students with the mathematical skills needed in their degree programmes, so it is essential that a proper foundation is laid. Since 2006, a web-based automated assessment system called STACK has been used on basic mathematics courses at Aalto University for supplementary exercises to aid learning. In this thesis, computer-aided mathematics teaching and, in particular, automated assessment are studied to investigate what effect attempting to solve online exercises has on mathematical proficiency. This is done by using a Granger causality test. For this, the first two of three basic courses are examined. The concepts relating to learning and computer-aided mathematics teaching, as well as the developments made at Aalto University, including Mumie, are first presented. Then, the statistical methodology, the theoretical framework and the test procedure for Granger causality are described. The courses and data, which were collected from STACK and used to quantify mathematical proficiency for the Granger causality test, are then reviewed. Finally, the results and implications are presented. The Granger causality tests show that there exists a Granger-causal relationship such that mathematical proficiency affects the desire to attempt to solve exercises. This holds for both of the interpretations used for quantifying mathematical proficiency and for all variations of the penalty deducted for incorrect attempts. The results imply that the exercises are too difficult for the students and that students tend to give up quickly. Thus, the Granger causality tests produced statistically significant results to back up what teachers have always known: students are discouraged by failure, but encouraged by success. The results provide teachers with valuable information about the students' abilities and enable teachers to alter the teaching accordingly to better support the students' learning.
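A hedged sketch of how such a test can be run in practice (synthetic stand-in data; `grangercausalitytests` from statsmodels tests whether the second column Granger-causes the first):

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Hypothetical weekly course data: column 0 = exercise attempts,
# column 1 = a proficiency score; the test asks whether column 1
# helps predict column 0 beyond column 0's own past.
rng = np.random.default_rng(1)
prof = np.cumsum(rng.normal(size=200))
attempts = 5 + 0.8 * np.roll(prof, 1) + rng.normal(size=200)
data = np.column_stack([attempts, prof])

# F-tests for lags 1..4; small p-values suggest "proficiency
# Granger-causes attempts", the direction reported in the thesis.
results = grangercausalitytests(data, maxlag=4)
```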
-
(2015)In this thesis we consider dynamic X-ray computed tomography (CT) in a two-dimensional case. In X-ray CT we take X-ray projection images from many different directions and compute a reconstruction from those measurements. Sometimes the change over time in the imaged object needs to be taken into account, for example in cardiac imaging or in angiography. This is why we look at the dynamic case, where something changes in time while the measurements are taken. At the beginning of the thesis, in chapter 2, we present some necessary theory on the subject. We first go through some general theory about inverse problems and then concentrate on X-ray CT. We discuss the ill-posedness of inverse problems, regularization, and the measurement process in CT. Different measurement settings and the discretization of the continuous case are introduced. In chapter 3 we introduce a solution method for the problem: total variation regularization with the Barzilai-Borwein minimization method. The Barzilai-Borwein minimization method is an iterative method well suited for large-scale problems. We also explain two different methods for choosing the regularization parameter needed in the minimization process: the multi-resolution parameter choice method and the S-curve method. Chapter 4 describes the materials used in the thesis. We have both simulated and real measured data. The simulated data was created using rendering software, and for the real data we took X-ray projection images of a Lego robot. The results of the tests done on the data are shown in chapter 5. We ran tests on both the simulated and the measured data with two different measurement settings: first assuming we have 9 fixed source-detector pairs, and then assuming we have only one source-detector pair. For the case with only one pair, we tested the implemented regularization method by first considering the change in the imaged object to be periodic. We then assume we can use only some number of consecutive moments, based on the rate at which the object is changing, to collect the data. Here we get only one X-ray projection image at each moment, and we combine measurements from multiple different moments. In the last chapter, chapter 6, we discuss the results. We noticed that the regularization method is quite slow, at least partly because of the functions used in the implementation. The results obtained were quite good, especially for the simulated data. The simulated data had fewer details than the measured data, so it makes sense that we got better results with less data. Already with only four angles we could recover some details with the simulated data, and for the measured data details were also visible with 8 and 16 angles.
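A sketch of the Barzilai-Borwein iteration itself (generic form; in the setting above `grad` would be the gradient of the data-fidelity term plus a smoothed total variation penalty, e.g. `lambda x: A.T @ (A @ x - m) + mu * tv_grad(x)` with hypothetical names):

```python
import numpy as np

def bb_minimize(grad, x0, iters=200):
    """Barzilai-Borwein gradient iteration: a plain gradient method whose
    step length alpha_k = (s^T s) / (s^T y) mimics a quasi-Newton step,
    with s = x_k - x_{k-1} and y = grad_k - grad_{k-1}."""
    x_prev, g_prev = x0, grad(x0)
    x = x_prev - 1e-4 * g_prev          # small first step starts the recursion
    for _ in range(iters):
        g = grad(x)
        s, y = x - x_prev, g - g_prev
        alpha = (s @ s) / (s @ y + 1e-12)
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x
```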
-
(2018)The purpose of this work is to give an elementary introduction to the Finite Element Method (FEM). We give an abstract mathematical formalization of the finite element problem and work out how the method is suitable for approximating solutions of partial differential equations. In Chapter 1 we give concrete example code of a Finite Element Method implementation in Matlab for a relatively simple problem. In Chapter 2 we give an abstract formulation of the problem and introduce the necessary concepts in functional analysis. When the Finite Element Method is interpreted in a suitable fashion, we can apply results of functional analysis to examine the properties of the solutions. We introduce two equivalent weak formulations of differential equations: Galerkin's formulation and the minimizing problem. In addition we define the necessary concepts regarding certain function spaces. For example, we define one crucial complete inner product space, namely the Sobolev space. In Chapter 3 we define the building blocks of the element space: the meshing and the elements. Elements consist of their geometric shape, basis functions, and functionals on the basis functions. We also introduce the concept of interpolation and construct basis functions in one, two and three dimensions. In Chapter 4 we introduce implementation techniques in a rather broad sense. We introduce the crucial concepts of the stiffness matrix and the load vector, and present a procedure for implementing Poisson's equation and the Helmholtz equation. We introduce one way of doing numerical integration, by Gaussian quadrature points and weights. We define the reference element and the mathematical concepts relating to it. The reference element and Gaussian quadrature points are widely used techniques when implementing the Finite Element Method on a computer. In Chapter 5 we give a rigorous analysis of the convergence properties of the Finite Element Method solution. We show that an arbitrary function in a Sobolev space can be approximated arbitrarily closely by a certain polynomial, namely the Sobolev polynomial. The accuracy of the approximation depends on the size of the mesh and the degree of the polynomial. Polynomial approximation theory in Sobolev spaces has a connection to the Finite Element Method's approximation properties through Céa's lemma. In Chapter 6 we give some examples of a posteriori convergence properties. We compare the Finite Element Method solution acquired with a computer to the exact solution. Interesting convergence properties are found using linear and cubic basis functions. The results seem to verify the properties derived in Chapter 5.
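As a small worked instance of the stiffness matrix and load vector assembly described above (a 1D Poisson problem with hat functions and one-point quadrature; the thesis's example code is in Matlab, this sketch uses Python):

```python
import numpy as np

# -u'' = f on (0, 1) with u(0) = u(1) = 0, linear "hat" basis functions
# on a uniform mesh: element stiffness (1/h) * [[1, -1], [-1, 1]].
n_el = 10
h = 1.0 / n_el
K = np.zeros((n_el + 1, n_el + 1))   # global stiffness matrix
F = np.zeros(n_el + 1)               # global load vector
f = lambda x: np.pi**2 * np.sin(np.pi * x)   # exact solution sin(pi x)

for e in range(n_el):
    K[e:e+2, e:e+2] += (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    mid = (e + 0.5) * h
    F[e:e+2] += f(mid) * h / 2.0     # midpoint quadrature of the load

u = np.zeros(n_el + 1)
u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])   # impose u(0)=u(1)=0
```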
-
(2013)This thesis studies elliptic partial differential operators defined on compact manifolds without boundary, using the theory of pseudodifferential operators. The first chapters define the required classes of pseudodifferential operators and present parts of the theory of these operator classes. In the last two chapters, complex powers of elliptic partial differential operators are defined and their properties are studied in light of the theory presented in the earlier chapters. The main goals are to determine the operator classes of these powers and, in the self-adjoint case, also their spectral asymptotics.
-
(2017)The aim of this project is to investigate the hydra effect occurring in a population infected by a disease. First, I explain what exactly the hydra effect is. Intuitively, a higher mortality rate applied to a population will decrease the size of that population, but this is not always the case. Under some circumstances the population size might increase with higher mortality, causing the phenomenon called by Abrams and Matsuda (2005) the 'hydra effect', after the mythological beast that grew two heads in place of each one removed. Abrams (2009) lists a few mechanisms underlying the hydra effect, of which the one I will focus on is the temporal separation of mortality and density dependence. Most work on the hydra effect has involved an explicit increase of a death rate, for example by harvesting. The idea of this thesis is to investigate the existence of the hydra effect due to mortality increased not explicitly, but through a lethal disease. Such an approach has not been shown in any published work so far. Instead of harvesting, we have virulence, the disease-induced mortality. In this project, I first briefly explain some theory underlying my model. In chapter 2 I look at the disease-free population and carry out bifurcation analysis when varying the birth rate. In chapter 3 I propose the model and continue with population dynamics analysis. I look at bifurcations of equilibria when varying the birth rate, the virulence and the transmission rate. Then in section 3.4 I investigate whether it is possible to observe the hydra effect if there exists a trade-off between virulence and transmission rate, and derive a condition for transcritical and fold bifurcations to occur. In chapter 4 I focus on the evolution of traits. First I study the evolution of the pathogen, assuming the same trade-off as earlier. Finally I look at the evolution of the host's traits, immunity and birth rate, using the Adaptive Dynamics framework (Geritz et al. 1998). I compare two possible trade-off functions and show that with a concave trade-off, the host will evolve to rid itself of the disease despite the cost of increasing its immunity.
-
(2013)The research question in this thesis concerns how well epileptic seizures can be detected using a single triaxial accelerometer attached to the wrist. This work was done in collaboration with Vivago Oy, who provided the watch capable of recording accelerometer data, and HUS, the Hospital District of Helsinki and Uusimaa. HUS provided the real-world epilepsy datasets, consisting of several days' worth of data recorded from several epilepsy patients. The research problem was divided into three subproblems: feature extraction, sensor fusion, and activity classification. For feature extraction, the original accelerometer signal is divided into 5 s long windows and the discrete cosine transform (DCT) is applied to each axis so that periodic components are detected, also removing the effect of the gravity vector and compressing the signal. Next, the DCT features of each axis are combined and principal component analysis (PCA) is applied, further compressing the signal. At this step the PCA theorem is also proven. After the DCT and PCA steps, the need to consider different orientations of the accelerometer is effectively eliminated. The last step is the classification of the signal into seizure or non-seizure by using a support vector machine (SVM) classifier on the features produced by PCA. The combined model is referred to as the DPS model (DCT-PCA-SVM). The experiments were run on two kinds of datasets: artificial datasets recorded by three test subjects, and the epilepsy datasets. The principal reason for recording artificial datasets was that the labeling of the seizures in the epilepsy dataset was practically impossible to match to the accelerometer data, rendering the supervised training phase of any model impossible. The artificial datasets were created so that one test subject produced the training set, recording data of ordinary daily activities and labeling these activities as non-seizures, and then imitating a seizure and labeling it as a seizure. The second test subject recorded the daily activities, including potential false positives such as brushing teeth and washing hands, and imitated a seizure several times during this period. This validation set was then used for fine-tuning the DPS model parameters so that all of the seizures were detected along with as few false positives as possible. The third test subject recorded the test set, including 13 imitated seizures, to test the DPS model's ability to generalize to new and previously unseen data. The conclusion is that for the artificial test set, 12 out of 13, or 92%, of the seizures were detected along with a reasonably low number of false positives. For the epilepsy dataset the results are inconclusive, since no part of it could be utilized as a training set, but there are reasonable indications that at least some real seizures were detected. In order to verify the results, the DPS model would need to be trained on a larger and better-labeled real-world epilepsy dataset.
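A hedged sketch of the DPS feature pipeline (window length, sampling rate and coefficient counts are invented; the commented training call uses hypothetical variable names):

```python
import numpy as np
from scipy.fft import dct
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def windows_to_features(acc, fs=50, win_s=5):
    """acc: (n_samples, 3) triaxial signal -> one DCT feature row per
    5 s window. Dropping coefficient 0 per axis removes the constant
    (gravity-dominated) component, as described above."""
    w = fs * win_s
    feats = []
    for i in range(len(acc) // w):
        seg = acc[i * w:(i + 1) * w]
        coeffs = dct(seg, axis=0, norm='ortho')[1:40]   # keep low frequencies
        feats.append(coeffs.T.ravel())                  # concatenate the 3 axes
    return np.array(feats)

# Hypothetical training call (labels: 1 = seizure window, 0 = other):
# model = make_pipeline(PCA(n_components=20), SVC(kernel='rbf'))
# model.fit(windows_to_features(train_acc), train_labels)
```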
-
(2020)Efficient estimation and forecasting of cash flow is of interest to pension insurance companies. At the turn of the year 2019 the Finnish national Incomes Register was introduced and the payment cycle of TyEL (Employees Pensions Act) contributions changed substantially: since January 1st 2019, TyEL payments are calculated and paid monthly by all employers insured under TyEL. Vector autoregressive (VAR) models are among the most used and successful multivariate time series models. They are widely used with economic and financial data due to their good forecasting abilities and the possibility of analysing dynamic structures between the variables of the model. The aim of this thesis is to determine whether a VAR model offers a good fit for predicting the incoming TyEL cash flow of a pension insurance company. With the monthly payment cycle arises the question of seasonality of the incoming TyEL cash flow, and thus the focus is on forecasting with seasonally varying data. The essential theory of VAR models is given. The forecasting abilities are tested by building a VAR model for monthly, seasonally varying time series similar to those a pension insurance company would have and could use for this particular prediction problem.
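A minimal sketch of fitting and forecasting with a VAR on seasonally differenced monthly data (the series, covariate and lag order below are invented for illustration):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Invented monthly series: a seasonal cash flow plus one covariate.
rng = np.random.default_rng(0)
months = pd.period_range('2019-01', periods=48, freq='M')
df = pd.DataFrame({
    'cashflow': 100 + 10 * np.sin(np.arange(48) * 2 * np.pi / 12)
                + rng.normal(0, 2, 48),
    'wage_sum': 50 + rng.normal(0, 1, 48),
}, index=months)

deseasoned = df.diff(12).dropna()   # 12-month differencing removes the yearly pattern
fit = VAR(deseasoned).fit(3)        # lag order fixed here; an IC could choose it
forecast = fit.forecast(deseasoned.values[-fit.k_ar:], steps=12)
```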
-
(2016)This thesis starts from Matsuda and Abrams' paper 'Timid Consumers: Self-Extinction Due to Adaptive Change in Foraging and Anti-predator Effort'. Matsuda and Abrams show an example of evolutionary suicide due to the evolution of prey timidity in a predator-prey model with a Holling type II functional response. The key assumption they use to obtain evolutionary suicide is that the predator population size is kept constant. In this thesis, we relax this assumption by introducing a second type of prey to the model and investigate whether evolutionary suicide may still occur through the evolution of timidity in the first prey species. To study this in the long term, we use the theory of adaptive dynamics. Firstly, we analyse the limit case where the predator dynamics depend only upon the second prey species. Predators still hunt the evolving prey, either as a snack or for entertainment, without gaining any energy. Under this hypothesis, our model reproduces Matsuda and Abrams' results both qualitatively and quantitatively. Moreover, the introduction of the second type of prey allows for the appearance of limit cycles as dynamical attractors. We detect a fold bifurcation in the stability of the limit cycles as the first prey's timidity increases. Thus, we are able to construct an example of evolutionary suicide on a fold bifurcation of limit cycles. Furthermore, we perform critical function analysis on the birth rate of the evolving prey as a function of prey timidity and derive general conditions on the birth rate function that assure the occurrence of evolutionary suicide. Secondly, we analyse the full model without making any simplifying assumptions. Because of the analytical complexity of the system, we use numerical bifurcation analysis to study bifurcations of the internal equilibria. More specifically, we utilize the package MatCont to carry out continuation of equilibria. In this way, we are able to estimate the range of parameters in which the results of Matsuda and Abrams' model hold. Starting from the parameter set that reproduces Matsuda and Abrams' results quantitatively, we track the fold bifurcation and show that evolutionary suicide occurs for a considerably wide range of parameters. Moreover, we find that in the full model evolutionary suicide may also occur through a subcritical Hopf bifurcation.
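A generic two-prey/one-predator system with a Holling type II response, in the spirit of the limit case above (the functional form and parameter values are assumptions for illustration, not the thesis's exact model):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, z, r1, r2, K1, K2, a1, a2, h, c, d):
    """z = (n1, n2, p): two prey densities and the predator density.
    The predator gains energy only from prey 2, matching the limit
    case discussed above; prey 1 is hunted without energy gain."""
    n1, n2, p = z
    denom = 1 + a1 * h * n1 + a2 * h * n2   # Holling type II saturation
    f1, f2 = a1 * n1 / denom, a2 * n2 / denom
    return [r1 * n1 * (1 - n1 / K1) - f1 * p,
            r2 * n2 * (1 - n2 / K2) - f2 * p,
            c * f2 * p - d * p]

sol = solve_ivp(rhs, (0, 500), [0.5, 0.5, 0.2],
                args=(1.0, 1.0, 1.0, 1.0, 5.0, 5.0, 1.0, 0.5, 0.2),
                dense_output=True)
```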
-
(2016)Dispersal is a significant characteristic of the life history of many species. Dispersal polymorphisms in nature suggest that dispersal can have a significant effect on species diversity, and the evolution of dispersal is one probable route to speciation. I consider an environment of well-connected and separate living sites and study how connectivity differences between sites can affect the evolution of a two-dimensional dispersal strategy. Two-dimensionality means that the strategy consists of two separate traits. Adaptive dynamics is a mathematical framework for the analysis of evolution. It assumes small phenotypic mutations and considers the invasion possibility of a rare mutant. Generally, invasion by a sufficiently similar mutant leads to substitution of the former resident. Consecutive invasion-substitution processes can lead to a singular strategy where directional evolution vanishes and evolution may stop or result in evolutionary branching. First I introduce some fundamental elements of adaptive dynamics. Then I construct a mathematical model for studying evolution, built on the basis of the Hamilton-May model (1977). Last I analyse the model using the tools introduced previously. The analysis predicts evolution to a unique singular strategy in a monomorphic resident population. This singularity can be evolutionarily stable or branching, depending on survival probabilities during the different phases of dispersal. After branching, the resident population becomes dimorphic, and there seems always to be an evolutionarily stable dimorphic singularity. At this singularity one resident specializes fully in the well-connected sites while the other resides in both types of sites. Connectivity differences between sites can thus lead to evolutionary branching in a monomorphic population and maintain a stable dimorphic population.
-
(2017)In this thesis we formulate and analyze a structured population model with infectious disease dynamics, based on a life cycle similar to that of individuals in the Hamilton-May model. Each individual is characterized by a strategy vector (state-dependent dispersal), and depending on its infectious status, an individual uses the corresponding strategy. We begin by assuming that every individual in the population has the same strategy and, as the population equilibrates, consider a mutant with its own strategy entering the population and trying to invade. We apply the theory of adaptive dynamics to model the invasion fitness of the mutant and to analyze the evolution of dispersal. We show that evolutionary branching is possible, and when such an event happens, the evolutionary trajectories of the two strategies, described by the canonical equation of adaptive dynamics, lead to the extinction of one branch. The surviving branch then evolves towards the extinction of the disease.
-
(2020)The Hawk-Dove game has been used as a model of situations of conflict in fields as diverse as sociology, politics, economics and animal behavior. The iterated Hawk-Dove game has several rounds, with a payoff in each round. This thesis is about a version of the iterated Hawk-Dove game with the additional new feature that each player can unilaterally decide when to quit playing. After quitting, both players return to a pool of temporarily inactive players. New games can be initiated by random pairing of individuals from within the pool. The decision to quit is based on a rule that takes into account the actions of oneself or one's opponent, or the payoffs received during the last or previous rounds of the present game. In this thesis, the quitting rule is that a player quits if its opponent acts as a Hawk. The additional feature of quitting dramatically changes the game dynamics of the traditional iterated Hawk-Dove game, and the aim of the thesis is to study these changes. To that end we use elements of dynamical systems theory as well as game theory and adaptive dynamics. Game theory and adaptive dynamics are briefly introduced as background for the model I present, providing all the essential tools to analyze it. Game theory provides an understanding of the role of payoffs and the notion of evolutionarily stable strategies, as well as the mechanics of iterated games. Adaptive dynamics provides the tools to analyze the behavior of a mutant strategy and the conditions under which it can invade the resident population; it focuses on the evolutionary success of the mutant in the environment set by the current resident. In the standard iterated Hawk-Dove game, always playing Dove (all-Dove) is a losing strategy. The main result of my model is that strategies such as all-Dove, and mixed strategy profiles that are likewise not considered worthwhile in the standard iterated Hawk-Dove game, can be worthwhile when quitting and the pool are part of the dynamics. Depending on the relations between the payoffs, these strategies can be victorious.
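A small simulation sketch of one game under the quitting rule above (payoff values and Hawk probabilities are invented; strategies here are simple per-round Hawk probabilities):

```python
import numpy as np

V, C = 2.0, 3.0    # resource value and cost of fighting (C > V)
PAYOFF = {('H', 'H'): ((V - C) / 2, (V - C) / 2),
          ('H', 'D'): (V, 0.0),
          ('D', 'H'): (0.0, V),
          ('D', 'D'): (V / 2, V / 2)}

def play_until_quit(p1_hawk, p2_hawk, rng, max_rounds=1000):
    """Iterated Hawk-Dove where a player quits as soon as its opponent
    plays Hawk (the quitting rule above); p*_hawk is each player's
    probability of playing Hawk in a round. Returns total payoffs."""
    total = np.zeros(2)
    for _ in range(max_rounds):
        a1 = 'H' if rng.random() < p1_hawk else 'D'
        a2 = 'H' if rng.random() < p2_hawk else 'D'
        total += PAYOFF[(a1, a2)]
        if a1 == 'H' or a2 == 'H':   # someone saw a Hawk: the game ends
            break                    # and both return to the pool
    return total

payoffs = play_until_quit(0.1, 0.1, np.random.default_rng(0))
```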
-
(2017)Recent biomathematical literature has suggested that, under the assumption of a trade-off between replication speed and fidelity, a pathogen can evolve to more than one optimal mutation rate. O'Fallon (2011) presents a particularly compelling case grounded in simulation. In this thesis, we treat the subject analytically, approaching it through the lens of adaptive dynamics. We formulate a within-host model of the pathogen load starting from assumptions at the genomic level, explicitly accounting for the fact that most mutations are deleterious and stunt growth. We single out the pathogen's mutation probability as the evolving trait that distinguishes strains from one another. Our between-host dynamics take the form of an SI model, first without superinfection and later with two types of non-smooth superinfection function. The pathogen's virulence and transmission rate are functions of the within-host equilibrium pathogen densities. In the case of our mechanistically defined superinfection function, we uncover evolutionary branching in conjunction with two transmission functions, one a caricatural (expansion) example, the other a more biologically realistic (logistic) one. Because of the non-smoothness of the mechanistic superinfection function, our branching points are actually one-sided ESSs à la Boldin and Diekmann (2014). When branching occurs, two strains with different mutation probabilities both ultimately persist on the evolutionary timescale.
-
(2020)The applied mathematical field of inverse problems studies how to recover an unknown function from a set of possibly incomplete and noisy observations. One example of a real-life inverse problem is image destriping, the process of removing stripes from images. Stripe noise is a very common phenomenon in various fields such as satellite remote sensing and dental x-ray imaging. In this thesis we study methods to remove stripe noise from dental x-ray images. The stripes in the images are a consequence of the geometry of our measurement and the sensor. In x-ray imaging, x-rays are sent at a certain intensity through the measured object, and the remaining intensity is measured using an x-ray detector. The detectors used in this thesis convert the remaining x-rays directly into electrical signals, which are then measured and finally processed into an image. We observe that the obtained values behave according to an exponential model and use this knowledge to transform the correction into a nonlinear fitting problem. We study two linearization methods and three iterative methods, and examine the performance of the correction algorithms with both simulated and real stripe images. The results of the experiments show that although some of the fitting methods give better results in the least-squares sense, the exponential prior leaves some visible line artefacts. This suggests that the methods can be further improved by applying a suitable regularization method. We believe that this study is a good baseline for a better correction method.
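A sketch of the column-wise exponential fitting idea (the detector model form follows the abstract; function names, the flat-field calibration series and the reference-response correction are assumptions for illustration, not the thesis's exact algorithm):

```python
import numpy as np
from scipy.optimize import curve_fit

def response(intensity, a, b):
    """Assumed exponential detector model: measured value as a function
    of incoming intensity (a, b vary from detector element to element)."""
    return a * (1.0 - np.exp(-b * intensity))

def destripe_column(measured, intensity, reference):
    """Fit the exponential model for one detector column against a known
    (flat-field) intensity series, then map the column onto a common
    reference response (a_ref, b_ref)."""
    (a, b), _ = curve_fit(response, intensity, measured,
                          p0=(measured.max(), 1.0))
    # invert this column's response, then re-apply the reference response
    est_intensity = -np.log(np.clip(1.0 - measured / a, 1e-6, None)) / b
    return response(est_intensity, *reference)
```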
-
(2018)X-ray computed tomography is an imaging method where the inner structure of an object is reconstructed from X-ray images taken from multiple directions around the object. When measurements from only a few measurement directions are available, the problem becomes severely ill-posed and requires regularization. This involves choosing a regularizer with desirable properties, as well as a value for the regularization parameter. In this thesis, sparsity promoting regularization with respect to the Haar wavelet basis is considered. The resulting minimization problem is solved using the iterative soft thresholding algorithm (ISTA). For the selection of the regularization parameter, it is assumed that an a priori known level of sparsity is available. The regularization parameter is then varied on each iteration of the algorithm so that the resulting reconstruction has the desired level of sparsity. This is achieved using variants of proportional-integral-derivative (PID) controllers. PID controllers require tuning to guarantee that the desired sparsity level is achieved. We study how different tunings affect the reconstruction process, and experiment with two adaptive variants of PID controllers: an adaptive integral controller, and a neural network based PID controller. The two adaptive methods are compared to each other, and additionally the adaptive integral controlled ISTA is compared to two classical reconstruction methods: filtered back projection and Tikhonov regularization. Computations are performed using both real and simulated X-ray data, with varying numbers of available measurement directions. The integral control is shown to be crucial for the regularization parameter selection, while the proportional and derivative terms can be of use if additional control is required. Of the two adaptive variants, the adaptive integral control performs better with respect to all measured figures of merit. The adaptive integral controlled ISTA also outperforms the classical reconstruction methods both in terms of relative error and visual inspection when only a few measurement directions are available. The results indicate that variants of the PID controllers are effective for sparsity based regularization parameter selection. Adaptive variants are very end user friendly, avoiding the manual tuning of parameters. This makes it easier to use sparsity promoting regularization in real life applications. The PID control allows the regularization parameter to be selected during the iteration, thus making the overall reconstruction process relatively fast.
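A sketch of ISTA with an integral-type update of the regularization parameter toward a target sparsity level (a coefficient-domain toy version, standing in for the Haar-wavelet setting; the gain and update rule are invented, not the thesis's tuned controller):

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_sparsity_control(A, m, target_sparsity, lam0=1.0, ki=0.05,
                          step=None, iters=500):
    """ISTA for min ||A x - m||^2 + lam ||x||_1, with an integral
    controller nudging lam so that the iterate's fraction of nonzero
    coefficients tracks an a priori known target."""
    step = step or 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x, lam = np.zeros(A.shape[1]), lam0
    for _ in range(iters):
        x = soft_threshold(x - step * (A.T @ (A @ x - m)), step * lam)
        sparsity = np.mean(x != 0)                        # nonzero fraction
        lam *= np.exp(ki * (sparsity - target_sparsity))  # integral action:
        # too many nonzeros -> raise lam (more thresholding), and vice versa
    return x
```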
-
(2015)The purpose of this thesis is to present how focus stacking algorithms work. Focus stacking is a digital image processing method in which several images, each with a different part of the subject in focus, are combined: the sharp regions are extracted from the images and assembled into a new image, producing an image in which the whole subject is as sharp as possible. This thesis examines two ways of identifying the sharp regions of the images, detection based on the gradient and detection based on the Fourier transform. At the beginning of the thesis, in chapter two, the theory needed to implement focus stacking algorithms is presented. The chapter starts with terminology related to digital images and photography. After this, the theory of the gradient is presented, together with how the gradient of an image can be computed using various convolution kernels. Next, the theory of the Fourier transform is described, particularly in the discrete case, and the fast Fourier transform is also discussed. The chapter closes with low-pass and high-pass filtering. The third chapter presents the material used in the work: photographs, taken by the author, of three skiing bunnies, with a different bunny in focus in each image. The goal is to use the algorithms to produce an image in which all three bunnies are as sharp as possible. Chapter four presents the methods: the gradient method compares sharp and blurred regions using norms computed from the gradient magnitudes, and the Fourier transform method using norms of the Fourier transforms. Chapter five presents results obtained with both methods for different parameter values. The algorithms have three adjustable parameters: the size of the region examined at a time, a threshold, and, in the Fourier transform method, the cutoff frequency of the high-pass filtering. The images are thus always compared one region-sized patch at a time. The threshold is a number such that the regions whose norms fall below it are taken from, for example, the first image; these regions are blurred in all of the images. The threshold ensures that the blurred background comes out even. In the conclusions, chapter six, the results, which are surprisingly good, are examined. There is no great difference between the best image obtained with the gradient method and the best image obtained with the Fourier transform method. Both images contain some errors, i.e. blurred pixels. With the gradient method the edges of the sharp regions come out smooth-looking more easily than with the Fourier transform method, but whole blurred regions often remain in the images. With the Fourier transform method, small irregularly shaped blurred regions often remain at the edges. Of the parameter values, the size of the examined region and the cutoff frequency of the high-pass filtering in particular have a great effect on the results.
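A compact sketch of the gradient-based variant in Python (window size invented; a per-pixel argmax over window-averaged gradient magnitudes replaces the thesis's threshold logic, which is omitted here):

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def focus_stack(images, win=9):
    """Gradient-based focus stacking sketch: per pixel, pick the image
    whose local (window-averaged) gradient magnitude is largest.
    `images`: list of equal-sized grayscale arrays, each sharp in a
    different region of the subject."""
    stack = np.stack(images)                       # shape (k, H, W)
    sharpness = np.stack([
        uniform_filter(np.hypot(sobel(im, 0), sobel(im, 1)), win)
        for im in images])
    best = np.argmax(sharpness, axis=0)            # index of sharpest image
    return np.take_along_axis(stack, best[None], axis=0)[0]
```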
-
(2017)In the paper 'No-arbitrage, leverage and completeness in a fractional volatility model' (2015), Mendes et al. present a market model in which the volatility process of the stock price is built from fractional Brownian motion. Mendes et al. prove that such a market model is arbitrage-free. The completeness of the market model, in turn, depends on whether the randomness of the log-price and of the volatility process is generated from the same process or from two independent processes. This thesis goes through those proofs and the mathematics they require: the stochastic integral, Itô's formula, and the first two fundamental theorems of asset pricing. We also treat Girsanov's theorem and a number of conditions under which the stochastic exponential of a local martingale is a martingale.