
Browsing by Title


  • Nousiainen, Katri (2018)
    The human brain is divided into left and right hemispheres, and there are functional differences between the hemispheres. A hemispheric difference is called lateralization of a brain function, and the degree of lateralization is described by the laterality index. The most investigated domain of the lateralized brain functions is language, which is a left-hemisphere-dominant function in the majority of the population. Functional magnetic resonance imaging provides a noninvasive method for studying brain functions indirectly through the blood-oxygenation-level-dependent effect. Language-related functional magnetic resonance imaging can be used to localize Broca's speech area and to determine the dominant hemisphere in epileptic patients. The purpose of this thesis is to assess a method for calculating the laterality index from functional magnetic resonance imaging data. The data were acquired during three language task paradigms with five subjects and analyzed statistically. The methods used for laterality index calculations are reviewed, and a new calculation method is presented. Result tables of laterality indices and hemispheric dominances per region of interest are generated. The presented laterality index calculation method successfully determined the speech laterality of three subjects out of five as left hemispheric dominance. The laterality determination for the remaining two subjects was unsuccessful due to corrupted functional data and contradictory results between paradigms. The major source of error is the subject's head motion during functional imaging. Together with information about the extent of head motion, the generated table could provide relevant extra information for epileptic patients' functional magnetic resonance imaging data and could serve clinical purposes in the future.
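The laterality index referred to above is conventionally computed from activation counts in homologous left and right regions of interest. A minimal sketch (the 0.2 dominance cutoff is a common convention in the fMRI literature, not necessarily the threshold used in this thesis):

```python
def laterality_index(left_count, right_count):
    """Laterality index from activated-voxel counts in homologous
    left/right regions of interest: LI = (L - R) / (L + R)."""
    total = left_count + right_count
    if total == 0:
        raise ValueError("no activated voxels in either region of interest")
    return (left_count - right_count) / total

def dominance(li, threshold=0.2):
    """Map an LI value to a hemispheric dominance label.
    The 0.2 cutoff is a widely used convention, not taken from the thesis."""
    if li > threshold:
        return "left"
    if li < -threshold:
        return "right"
    return "bilateral"
```

Positive LI values indicate left-hemisphere dominance, negative values right-hemisphere dominance.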
  • Sirviö, Robert (2016)
    Measuring risk is mandatory in every form of responsible asset management; be it mitigating losses or maximizing performance, the level of risk dictates the magnitude of the effect of the strategy the asset manager has chosen to execute. Many common risk measures rely on simple statistics computed from historical data. In this thesis, we present a more dynamic risk measure explicitly aimed at the commodity futures market. The basis of our risk measure is a stochastic model of the commodity spot price, namely the Schwartz two-factor model. The model is essentially determined by a system of stochastic differential equations, where the spot price and the convenience yield of the commodity are modelled separately. The spot price is modelled as a geometric Brownian motion with a correction factor (the convenience yield) applied to the drift of the process, whereas the convenience yield is modelled as an Ornstein-Uhlenbeck process. Within this framework, we show that the price of a commodity futures contract has a closed form solution. The pricing of futures contracts works as a coupling between the unobservable spot price and the observable futures contract price, rendering model fitting and filtering techniques applicable to our theoretic model. The system parameters of our model are fitted using the prediction error decomposition algorithm. The core of the algorithm is a by-product of the Kalman filter, which enables the extraction of the likelihood of a single parameter set. By subjecting the likelihood extraction process to numerical optimization, the optimal parameter set is acquired, provided that the process converges. Once we have obtained the optimal parameter sets for all of the commodity futures included in the portfolio, we are ready to perform the risk measurement procedure.
The first phase of the process is to generate multiple future trajectories of the commodity spot prices and convenience yields. The trajectories are then subjected to the trading algorithm, generating a distribution of the returns for every commodity. Finally, the distributions are aggregated, resulting in a returns distribution on a portfolio level for a given target time frame. We show that the properties of this distribution can be used as an indicator for possible anomalies in the returns within the given time frame.
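The trajectory-generation step can be sketched with an Euler-Maruyama discretisation of the Schwartz two-factor dynamics; function and parameter names here are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np

def simulate_schwartz_two_factor(s0, delta0, mu, kappa, alpha,
                                 sigma_s, sigma_d, rho,
                                 T=1.0, n_steps=252, n_paths=1000, seed=0):
    """Euler-Maruyama simulation of the Schwartz two-factor model:
    the spot S follows a GBM whose drift is corrected by the convenience
    yield delta, and delta follows an Ornstein-Uhlenbeck process:
      dS     = (mu - delta) S dt + sigma_s S dW1
      ddelta = kappa (alpha - delta) dt + sigma_d dW2,  corr(dW1, dW2) = rho
    Returns terminal spot prices and convenience yields for all paths."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    log_s = np.full(n_paths, np.log(s0))
    delta = np.full(n_paths, delta0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        # correlate the two Brownian increments
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        log_s += (mu - delta - 0.5 * sigma_s**2) * dt + sigma_s * np.sqrt(dt) * z1
        delta += kappa * (alpha - delta) * dt + sigma_d * np.sqrt(dt) * z2
    return np.exp(log_s), delta
```

Simulating in log-price keeps the spot strictly positive; the terminal values across paths form the return distribution that is then fed to the trading algorithm.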
  • Turunen, Tarja (2023)
    Norway spruce (Picea abies (L.) Karst.) is one of the economically most important tree species in Finland. It is known to be a drought-sensitive species and is expected to suffer from the warming climate. In addition, warmer temperatures benefit the pest insect Eurasian spruce bark beetle (Ips typographus L.) and the pathogen Heterobasidion parviporum, which both use Norway spruce as their host and can make the future of Norway spruce in Finland even more difficult. In this thesis, adult Norway spruce mortality was studied from false-colour aerial photographs taken between 2010 and 2021. Dead trees were detected from the photos by visual inspection, and mortality was calculated from the difference in the number of dead trees between photos from different years. The aim was to find out whether Norway spruce mortality in Finland had increased over time, and which factors had been driving tree mortality. The results indicate that tree mortality was highest in the last third of the studied 10-year period, so it was concluded that tree mortality had increased over time. Various possible tree mortality drivers were analysed and found to be connected to tree mortality. Each driver was analysed individually by testing its correlation with tree mortality. In addition, linear regression analysis and segmented linear regression with one breakpoint were used with the continuous variables. Increased tree mortality correlated with higher stand mean age, mean height, mean diameter, and mean volume, supporting the findings of earlier research. Mortality was connected to the proportion of different tree species in the stand: the higher the proportion of spruce, the higher the mortality, and the higher the proportion of deciduous trees, the lower the mortality. Of the fertility classes, tree mortality was highest in the second most fertile class, herb-rich heath forest, and mortality decreased with decreasing fertility.
Dead trees were also found to be located closer to stand edges than to the stand centroid. Increased temperature resulted in increased mortality. Increased vapour pressure deficit (VPD) and drought, which was analysed with the Standardized Precipitation Evapotranspiration Index (SPEI) at different time scales, were also connected with increased tree mortality. Further research is required to understand and quantify the joint effect of all the interacting mortality drivers. Nevertheless, it seems that for Norway spruce the warmer future with increased mortality is already here, and it should be taken into consideration in forest management. Favouring mixed stands could be one of the solutions to help Norway spruce survive in the warming climate.
  • Helgadóttir, Steinunn (2023)
    Clean snow has the highest albedo of any natural surface, making snow-covered glaciers an important component of the Earth's energy balance. However, the presence of light-absorbing impurities such as mineral dust on glacier surfaces alters their reflective properties, leading to a reduction in albedo and consequently increased absorption of incoming solar radiation, which further impacts the glacier surface mass balance (SMB). Icelandic glaciers exhibit high annual and inter-annual variability in SMB due to climate variability, but deposition of mineral dust originating from glaciofluvial dust hotspots can have large impacts on summer ablation. The frequency of dust storms and deposition is controlled by high-velocity winds and prolonged dry periods. Additionally, Icelandic mineral dust contains a high amount of iron and iron oxides, which makes it extremely light-absorbing. An extensive dust event occurred over the southwest outlets of Vatnajökull ice cap during early July 2022, causing surface darkening. To investigate the impact of the dust event on the melt-season SMB, this study used automatic weather station (AWS) data from three different sites on Tungnaárjökull glacier, a SW outlet of Vatnajökull ice cap. Daily melt was estimated with a simple snow-melt model and AWS data. To quantify the overall impact of dust on the melt rates, albedo from the 2015 melt season for the three AWS sites was used to simulate the surface albedo of a dust-free surface during the 2022 melt season. The dust event caused a melt enhancement of almost 1.5 m water equivalent (m w.e.) above 1000 m elevation. As Icelandic glaciers exhibit large spatial variations over the melt season, the SMB sensitivity to dust deposition varied with elevation, being strongest at the uppermost site.
Additionally, the sensitivity to the timing of the dust event was investigated, demonstrating that an earlier occurrence increases the melt while a later occurrence reduces it, compared to the July event. The results of this study reveal the impact of positive radiative forcing on the SMB of Tungnaárjökull.
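The shortwave part of a simple snow-melt model of the kind used above reduces to converting absorbed radiation into melt; this sketch assumes an hourly time step and standard physical constants, and is not the study's actual model:

```python
LATENT_HEAT_FUSION = 3.34e5  # J/kg, latent heat of fusion of ice
WATER_DENSITY = 1000.0       # kg/m^3

def shortwave_melt(sw_in, albedo, dt_seconds=3600.0):
    """Melt (m water equivalent) produced by absorbed shortwave
    radiation over one time step: (1 - albedo) * SW_in * dt / (rho_w * L_f)."""
    absorbed = sw_in * (1.0 - albedo)  # absorbed shortwave flux, W/m^2
    return absorbed * dt_seconds / (WATER_DENSITY * LATENT_HEAT_FUSION)
```

Comparing the melt computed with the observed (dust-darkened) albedo against the melt computed with a simulated dust-free albedo gives the melt enhancement attributable to the dust event.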
  • Vázquez Mireles, Sigifredo (2021)
    Piperine is the major plant alkaloid encountered in various Piperaceae species and has received considerable attention in recent years because of its broad range of favorable biological and pharmacological activities, including antioxidant, immunostimulant, bioavailability-enhancing and anti-carcinogenic properties. The literature part of this thesis gives a selective overview of advanced methods for the quantitative analysis of piperine in plant-based materials and of the various approaches employed for instrumental analysis, including spectroscopic, chromatographic, and electrochemical techniques. An effort was made to evaluate the potential of the reported methods based on analytical figures of merit such as total sample throughput capacity, analytical range, precision, accuracy, limit of detection and limit of quantification. The experimental part of the thesis focused on the development of a convenient, robust, simple, efficient and reliable method to quantify piperine in pepper fruits. The analytical method established in this thesis involves liberation of piperine by continuous liquid extraction of ground pepper fruits with methanol, and cleanup of the crude extracts by reversed-phase solid-phase extraction. Analyte quantitation was accomplished using gradient reversed-phase high-performance liquid chromatography with mass spectrometric detection, using electrospray ionization ion trap mass spectrometry. To enable reliable internal standardization, a deuterium-labelled piperine surrogate (piperine-D10) was synthesized from piperine in three steps in a reasonable overall yield (65 %) and standard-level purity (99.7 %). It may be worth mentioning that the commercial market value of the amount of piperine-D10 synthesized in-house exceeds 167,400 euros.
One of the major challenges encountered during the development and optimization of the analytical method was the extreme photosensitivity of piperine and piperine-D10, both of which undergo extensive photoisomerization in solution upon exposure to ambient light within a matter of minutes. This issue was addressed by carrying out all tasks associated with synthesis, sample preparation and analytical measurements under dark conditions. For the preparation of calibrators, a fully automated procedure was developed, controlled by custom-written injector programs and executed in the light-protected sample compartment of a conventional autosampler module. In terms of merits, the developed analytical method offers good sample throughput capacity (run time 20 min, retention time 8.2 min), excellent selectivity and high sensitivity (limit of detection = 0.012 ppm, limit of quantification = 0.2 ppm). The method is applicable over a linear range of 0.4 to 20 ng of injected mass (r² = 0.999). The stability of standards and fully processed samples was found to be excellent, with less than 5 % variation in concentration after 3-week (calibrators) or 4-month (samples) storage at 4 °C and 23 °C, respectively, under dark conditions. Intra-day repeatability was better than 2.95 %. Preliminary validation data also suggest satisfactory inter-operator reproducibility. To test the applicability of the developed LC-MS method, it was employed to quantify piperine in a set of 15 pepper fruit samples, including black, white, red and green varieties of round and long peppers, purchased from local markets and retailers. The piperine contents obtained were in the range of 17.28 to 56.25 mg/g (piperine/minced sample) and generally in good agreement with values reported in the scientific literature.
It is justified to assume that the developed analytical method may be directly applicable to the quantitation of related pepper alkaloids in herbal commodities and, after some modifications to the sample preparation strategy, also to the monitoring of piperine in biological fluids such as serum and urine.
  • Kallonen, Kimmo (2019)
    Quarks and gluons are elementary particles called partons, which produce collimated sprays of particles when protons are collided head-on at the Large Hadron Collider. These observable signatures of the quarks and gluons are called jets and are recorded by huge particle detectors, such as the Compact Muon Solenoid. The reconstruction of jets from detector signals attempts to trace the particle-level information all the way back to the initial collision event and the initiating partons. Jets originating from gluons and the three lightest quarks are very similar to each other, exhibiting only subtle differences caused by the fact that gluons radiate more intensely. Quark/gluon jet discrimination algorithms are dedicated to distinguishing these two types of jets. Traditionally, likelihood-based quark/gluon discriminators have been used. While machine learning is nothing new to the high energy physics community, the advent of deep neural networks caused an upheaval, and they are now being applied to various tasks across the research field, including quark/gluon discrimination. In this thesis, three different deep neural network models are presented, and their comparative performance in quark/gluon discrimination is evaluated in seven bins of varying jet transverse momentum and pseudorapidity. The performance of a likelihood-based discriminator is used as a benchmark. Deep neural networks prove to provide excellent performance in quark/gluon discrimination, with a jet-image-based visual recognition model being the most robust and offering the largest performance improvement over the benchmark discriminator.
  • Sirén, Saija (2015)
    Lipids are found in all living organisms, and complex lipids are typically determined from biological samples and food products. Samples are usually prepared prior to analysis. Liquid-liquid extraction (LLE) is the most commonly used technique to isolate lipids. Two fundamental protocols are the Folch and Bligh & Dyer methods. In both methods, the extraction is based on lipid partitioning between chloroform and water-methanol phases. Methyl tert-butyl ether offers an environmentally friendly alternative to chloroform. The total lipid fraction can be further separated by solid-phase extraction. Complex lipids are typically isolated from other lipid species with silica SPE cartridges. The three main techniques used in the quantitative determination of complex lipids are thin layer chromatography (TLC), high performance liquid chromatography (HPLC) and direct infusion mass spectrometry (MS). Thin layer chromatography is a traditional technique, but its applicability is limited by poor resolution and the requirement of post-column derivatization. HPLC, in contrast, provides efficient separation and is easily coupled with several detectors. HPLC methods are the most commonly used in lipid analysis. Direct infusion mass spectrometry is an emerging technique: lipid molecules can be precisely identified during a fast measurement, and its other advantages are excellent selectivity and sensitivity. A new method for glycolipids was developed in the experimental part. Glycolipids were isolated from bio-oil samples using solid-phase extraction cartridges. Normal-phase liquid chromatography was utilized to separate the glycolipids, and detection was carried out with parallel tandem mass spectrometry (MS/MS) and evaporative light scattering detection (ELSD). Quantification was based on the ELSD measurements, whereas MS/MS was used to confirm the identification.
The developed method was validated, and the following parameters were determined: linearity, trueness, precision, measurement uncertainty, and the limits of detection and quantification. Precision results were satisfactory, mostly between 5 and 15 %. Trueness results were less satisfactory, because measured concentrations were typically higher than theoretical concentrations; depending on the analyte, they varied between 66 % and 194 %. Validation showed that the method needs further development. Mass spectrometric quantification could be considered if appropriate internal standards were available.
  • Enckell, Anastasia (2023)
    Numerical techniques have become powerful tools for studying quantum systems. Eventually, quantum computers may enable novel ways to perform numerical simulations and conquer problems that arise in classical simulations of highly entangled matter. Simple one-dimensional systems of low entanglement are efficiently simulatable on a classical computer using tensor networks. Such toy simulations also give us the opportunity to study the methods of quantum simulation, such as different transformation techniques and optimization algorithms that could be beneficial for near-term quantum technologies. In this thesis, we study a theoretical framework for fermionic quantum simulation and simulate the real-time evolution of particles governed by the Gross-Neveu model in one dimension. To simulate the Gross-Neveu model classically, we use the matrix product state (MPS) method. Starting from the continuum case, we discretise the model by putting it on a lattice and encode the time evolution operator with the help of the fermion-to-qubit transformations Jordan-Wigner and Bravyi-Kitaev. The simulation results are visualised as plots of probability density. The results exhibit the expected flavour and spatial symmetry of the system. The comparison of the two transformations shows better performance of the Jordan-Wigner transformation both before and after gate reduction.
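The Jordan-Wigner transformation mentioned above maps each fermionic mode operator to a string of Pauli operators. A minimal sketch of the mapping for the annihilation operator (the coefficient/Pauli-string pair representation is an illustrative choice, not the thesis's code):

```python
def jordan_wigner_annihilation(j, n_modes):
    """Pauli-string decomposition of the fermionic annihilation operator
    a_j under the Jordan-Wigner transformation:
        a_j = Z_0 ... Z_{j-1} (X_j + i Y_j) / 2,
    with identity on the remaining modes.
    Returns a list of (coefficient, pauli_string) terms."""
    z_part = "Z" * j                    # parity string on modes 0..j-1
    tail = "I" * (n_modes - j - 1)      # identity on modes j+1..n-1
    return [(0.5, z_part + "X" + tail), (0.5j, z_part + "Y" + tail)]
```

The growing Z-string encodes fermionic anticommutation; the Bravyi-Kitaev transformation trades these linear-length strings for logarithmic-depth parity trees.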
  • Hotari, Juho (2024)
    Quantum computing has enormous potential in machine learning, where problems can quickly scale to be intractable for classical computation. Quantum machine learning is a research area that combines ideas from quantum computing and machine learning. Powerful and useful machine learning depends on having large-scale datasets with which to train models to solve real-life problems. Currently, quantum machine learning lacks the large-scale quantum datasets required to further develop models and test quantum machine learning algorithms. This lack of large datasets currently limits the quantum advantage in the field of quantum machine learning. In this thesis, the concept of quantum data and the different types of applied quantum datasets used to develop quantum machine learning models are studied. The research methodology is based on a systematic and comparative literature review of state-of-the-art articles on quantum computing and quantum machine learning from recent years. We classify datasets into inherent and non-inherent quantum data based on the nature of the data. The literature review identifies patterns in applied quantum machine learning: testing and benchmarking of QML models primarily uses non-inherent quantum data, i.e. classical data encoded into the quantum system, while separate research is focused on generating inherent quantum datasets.
  • Haataja, Hanna (2016)
    In this thesis we introduce the Coleman-Weinberg mechanism through sample calculations. We calculate the effective potential in massless scalar theory and massless quantum electrodynamics. After the sample calculations, we walk through a simple model in which the scalar particle that breaks scale invariance resides in a hidden sector. Before going into the calculations we introduce basic concepts of quantum field theory. In that context we discuss the interaction of fields and the Feynman rules for Feynman diagrams. Afterwards we introduce thermal field theory and calculate the effective potential in two cases: massive scalar theory and the Standard Model without fermions. We introduce the procedure for calculating the effective potential including ring diagram contributions. The motivation is that spontaneously broken symmetries are sometimes restored in the high-temperature regime. If the phase transition between the broken-symmetry and full-symmetry phases is of first order, baryogenesis can occur. Using the methods introduced in this thesis, Standard Model extensions that contain hidden sectors can be analyzed.
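For reference, the one-loop effective potential of massless scalar quantum electrodynamics that the Coleman-Weinberg sample calculation arrives at, quoted here from the standard literature with the renormalisation point chosen at the minimum ⟨φ⟩:

```latex
V_{\mathrm{eff}}(\varphi) \;=\; \frac{\lambda}{4!}\,\varphi^{4}
\;+\; \frac{3e^{4}}{64\pi^{2}}\,\varphi^{4}
\left( \ln\frac{\varphi^{2}}{\langle\varphi\rangle^{2}} \;-\; \frac{25}{6} \right)
```

The logarithmic term, generated entirely by radiative corrections, shifts the minimum away from the origin and thereby breaks the classical scale invariance.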
  • Hernandez Serrano, Ainhoa (2023)
    Using quantum algorithms to carry out machine learning tasks is what is known as Quantum Machine Learning (QML), and the methods developed within this field have the potential to outperform their classical counterparts in solving certain learning problems. The development of the field depends in part on that of a functional quantum random access memory (QRAM), called for by some of the algorithms devised. Such a device would store data in superposition and could then be queried when algorithms require it, similarly to its classical counterpart, allowing for efficient data access. Taking an axiomatic approach to QRAM, this thesis presents the main considerations, assumptions and results regarding QRAM, and yields a QRAM handbook and comprehensive introduction to the literature pertaining to it.
  • Lintulampi, Anssi (2023)
    Secure data transmissions are a crucial part of modern cloud services and data infrastructures. Securing a communication channel for data transmission is possible if the communicating parties can securely exchange a secret key. The secret key is used in a symmetric encryption algorithm to encrypt digital data that is transmitted over an unprotected channel. Quantum key distribution is a method that communicating parties can use to securely share a secret cryptographic key with each other. The security of quantum key distribution requires that the communicating parties are able to ensure the authenticity and integrity of the messages they exchange on the classical channel during the protocol. For this purpose they use cryptographic authentication techniques such as digital signatures or message authentication codes. The development of quantum computers affects how traditional authentication solutions can be used in the future. For example, traditional digital signature algorithms will become vulnerable if a quantum computer is used to solve the underlying mathematical problems. Authentication solutions used in quantum key distribution should be safe even against adversaries with a quantum computer, to ensure the security of the protocol. This master's thesis studies quantum-safe authentication methods that could be used with quantum key distribution. Two different quantum-safe authentication methods were implemented for the quantum key distribution protocol BB84. The implemented authentication methods are compared based on their speed and the size of the authenticated messages. Security aspects related to the authentication are also evaluated. The results show that both authentication methods are suitable for use in quantum key distribution. The results also show that the implemented method using message authentication codes is faster than the method using digital signatures.
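Authenticating the classical channel with a message authentication code can be sketched as follows. This HMAC example only illustrates the MAC tag-and-verify pattern; it is not the thesis's implementation, and QKD deployments typically favour information-theoretically secure Wegman-Carter-style MACs over hash-based HMACs:

```python
import hmac
import hashlib

def authenticate(message: bytes, key: bytes) -> bytes:
    """Compute an authentication tag for a classical-channel message
    using HMAC-SHA256 with a pre-shared secret key."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes, key: bytes) -> bool:
    """Constant-time check that a received tag matches the message."""
    return hmac.compare_digest(tag, authenticate(message, key))
```

The receiver recomputes the tag from the message and the shared key; any tampering with the basis-reconciliation or error-correction messages makes verification fail.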
  • Veltheim, Otto (2022)
    The measurement of quantum states has been a widely studied problem ever since the discovery of quantum mechanics. In general, we can only measure a quantum state once as the measurement itself alters the state and, consequently, we lose information about the original state of the system in the process. Furthermore, this single measurement cannot uncover every detail about the system's state and thus, we get only a limited description of the system. However, there are physical processes, e.g., a quantum circuit, which can be expected to create the same state over and over again. This allows us to measure multiple identical copies of the same system in order to gain a fuller characterization of the state. This process of diagnosing a quantum state through measurements is known as quantum state tomography. However, even if we are able to create identical copies of the same system, it is often preferable to keep the number of needed copies as low as possible. In this thesis, we will propose a method of optimising the measurements in this regard. The full description of the state requires determining multiple different observables of the system. These observables can be measured from the same copy of the system only if they commute with each other. As the commutation relation is not transitive, it is often quite complicated to find the best way to match the observables with each other according to these commutation relations. This can be quite handily illustrated with graphs. Moreover, the best way to divide the observables into commuting sets can then be reduced to a well-known graph theoretical problem called graph colouring. Measuring the observables with acceptable accuracy also requires measuring each observable multiple times. This information can also be included in the graph colouring approach by using a generalisation called multicolouring. 
Our results show that this multicolouring approach can offer significant improvements in the number of needed copies when compared to some other known methods.
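The grouping of observables described above can be sketched as a greedy colouring of a conflict graph whose edges join non-commuting observables; greedy colouring is one standard heuristic shown here for illustration, and the thesis's actual colouring method may differ:

```python
def greedy_colouring(n, edges):
    """Greedy colouring of a conflict graph: vertices are observables and
    edges join pairs that do NOT commute. Observables sharing a colour
    commute pairwise and can be measured from the same copy of the state.
    Returns a dict mapping vertex -> colour index."""
    adjacency = {v: set() for v in range(n)}
    for u, v in edges:
        adjacency[u].add(v)
        adjacency[v].add(u)
    colour = {}
    for v in range(n):
        used = {colour[u] for u in adjacency[v] if u in colour}
        c = 0
        while c in used:          # smallest colour not used by a neighbour
            c += 1
        colour[v] = c
    return colour
```

Each colour class then corresponds to one measurement setting; the multicolouring generalisation additionally assigns each vertex as many colours as repetitions its required accuracy demands.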
  • Järvinen, Matti (University of Helsinki, 2004)
  • Haataja, Miika-Matias (2017)
    Interfaces of solid and liquid helium exhibit many physical phenomena. At very low temperatures the solid-liquid interface becomes mobile enough to allow a periodic melting-freezing wave to propagate along the surface. These crystallization waves were experimentally confirmed in ^4He decades ago, but in ^3He they are only observable at extremely low temperatures (well below 0.5 mK). Creating a measurement scheme with sufficiently low dissipation is therefore a difficult technical challenge. We have developed a method that uses a quartz tuning fork to probe oscillating helium surfaces. These mechanical oscillators are highly sensitive to interactions with the surrounding medium, which makes them extremely accurate sensors of many material properties. By tracking the fork's resonant frequency with two lock-in amplifiers, we have been able to attain a frequency resolution below 1 mHz. The shift in resonant frequency can then be used to calculate the corresponding change in surface level, provided the interaction between the fork and the helium surface is understood. One of the main goals of this thesis was to create interaction models that could provide quantitative estimates for these calculations. Experimental results suggest that the liquid-vapour surface forms a column of superfluid that is suspended from the tip of the fork. Due to the extreme wetting properties of superfluids, the fork is also coated with a thin (∼ 300 Å) layer of helium. The added mass from this layer depends on the fork-surface distance. Oscillations of the surface level thus cause a periodic change in the effective mass of the fork, which in turn modulates the resonant frequency. For the solid-liquid interface the interaction is based on the inviscid flow of superfluid around the moving fork. The added hydrodynamic mass increases when the fork oscillates closer to the solid surface. Crystallization waves below the fork will thus change the fork's resonant frequency.
We were able to excite gravity-capillary and crystallization waves in ^4He with a bifilarly wound capacitor. Using the quartz tuning fork detection scheme we measured the spectrum of both types of waves at 10 mK. According to the interaction models developed in this thesis, the surface level resolution of this method was ∼ 10 μm for the gravity-capillary waves and ∼ 1 nm for the crystallization waves. Thanks to the low dissipation (∼ 20 pW) of the measurement scheme, our method is directly applicable in future ^3He experiments.
  • Heikkilä, Mikko (2016)
    Probabilistic graphical models are a versatile tool for doing statistical inference with complex models. The main impediment for their use, especially with more elaborate models, is the heavy computational cost incurred. The development of approximations that enable the use of graphical models in various tasks while requiring less computational resources is therefore an important area of research. In this thesis, we test one such recently proposed family of approximations, called quasi-pseudolikelihood (QPL). Graphical models come in two main variants: directed models and undirected models, of which the latter are also called Markov networks or Markov random fields. Here we focus solely on the undirected case with continuous valued variables. The specific inference task the QPL approximations target is model structure learning, i.e. learning the model dependence structure from data. In the theoretical part of the thesis, we define the basic concepts that underpin the use of graphical models and derive the general QPL approximation. As a novel contribution, we show that one member of the QPL approximation family is not consistent in the general case: asymptotically, for this QPL version, there exists a case where the learned dependence structure does not converge to the true model structure. In the empirical part of the thesis, we test two members of the QPL family on simulated datasets. We generate datasets from Ising models and Sherrington-Kirkpatrick models and try to learn them using QPL approximations. As a reference method, we use the well-established Graphical lasso (Glasso). Based on our results, the tested QPL approximations work well with relatively sparse dependence structures, while the more densely connected models, especially with weaker interaction strengths, present challenges that call for further research.
  • Suominen, Heikki (2022)
    Quantum computers are one of the most prominent emerging technologies of the 21st century. While several practical implementations of the qubit, the elemental unit of information in quantum computers, exist, the family of superconducting qubits remains one of the most promising platforms for scaled-up quantum computers. Lately, as the limiting factor of non-error-corrected quantum computers has begun to shift from the number of qubits to gate fidelity, efficient control and readout parameter optimization has become a field of significant scientific interest. Since these procedures are multibranched and difficult to automate, a great deal of effort has gone into developing the associated software, and even technologies such as machine learning are making an appearance in modern programs. In this thesis, we offer an extensive theoretical background on superconducting transmon qubits, starting from classical models of electronic circuits and moving towards circuit quantum electrodynamics. We consider how the qubit is controlled, how its state is read out, and how the information contained in it can become corrupted by noise. We review theoretical models for characteristic parameters such as decoherence times, and see how control pulse parameters such as amplitude and rise time affect gate fidelity. We also discuss the procedure for experimentally obtaining characteristic qubit parameters, and the optimized randomized benchmarking for immediate tune-up (ORBIT) protocol for control pulse optimization, both in theory and alongside novel experimental results. The experiments are carried out with refactored characterization software and novel ORBIT software, using the premises and resources of the Quantum Computing and Devices (QCD) group at Aalto University.
The refactoring project, together with the software used for the ORBIT protocol, aims to provide the QCD group with efficient and streamlined methods for finding characteristic qubit parameters and high-fidelity control pulses. In the last parts of the thesis, we evaluate the success and shortcomings of the introduced projects, and discuss future perspectives for the software.
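Randomized benchmarking, on which ORBIT builds, estimates gate quality from the exponential decay of the average sequence fidelity with sequence depth, F(m) ≈ A·p^m + B. A sketch with synthetic, hypothetical numbers (not the QCD group's software or data; the fixed asymptote B = 0.5 is an assumption corresponding to a fully depolarized single qubit):

```python
import math
import random

random.seed(1)

# Synthetic randomized-benchmarking decay F(m) = A * p**m + B plus noise.
A, B, p_true = 0.5, 0.5, 0.995
depths = [1, 25, 50, 100, 200, 400]
fidelities = [A * p_true**m + B + random.gauss(0.0, 0.002) for m in depths]

# With the asymptote B assumed known, log(F - B) = log(A) + m * log(p)
# is linear in the sequence depth m, so an ordinary least-squares slope
# recovers log(p).
xs = depths
ys = [math.log(f - B) for f in fidelities]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
p_hat = math.exp(slope)

# For a single qubit, the average error per Clifford is r = (1 - p) / 2.
r = (1 - p_hat) / 2
print(f"fitted p = {p_hat:.4f}, error per Clifford ≈ {r:.2e}")
```

In practice the full three-parameter decay is fitted directly with nonlinear least squares; the log-linear trick above only works when the asymptote is pinned down in advance.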
  • Salminen, Reeta-Maaret Emilia (2013)
    In this study, polymeric fluorescence quenchers were investigated, with a focus on quenching efficiency assessed by Stern-Volmer plotting. Poly(4-vinylpyridine), poly(nitrostyrene), poly(allylamine), and two other polymers were used as quenchers. Measurements with polymers other than poly(nitrostyrene) were conducted in DMF. The measurements in aqueous solutions were conducted at different pH values, with water and methanol as solvents for the pyrene. Using methanol as the solvent for pyrene made it possible to vary the pyrene concentration. Poly(4-vinylpyridine) was found to be an excellent fluorescence quencher in aqueous solutions at pH 3.5, as was poly(nitrostyrene) in DMF solutions. The Stern-Volmer plot showed a linear dependence of the intensity ratio on quencher concentration, whereas the other polymeric quenchers tested showed downward curvature, implying that the polymer conformation may prevent the fluorophore-quencher interactions. The quenching of fluorescence was also found to be independent of pH.
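The Stern-Volmer analysis referred to above models dynamic quenching as I0/I = 1 + K_SV·[Q], so a plot of the intensity ratio against quencher concentration is a straight line when the model holds. A small illustrative sketch with made-up values (not data from the study; the constant K_SV below is hypothetical):

```python
# Stern-Volmer model for dynamic quenching: I0 / I = 1 + K_SV * [Q].
I0 = 100.0          # unquenched intensity (arbitrary units)
K_SV = 250.0        # hypothetical Stern-Volmer constant, L/mol
concentrations = [0.0, 0.001, 0.002, 0.004, 0.008]  # quencher, mol/L
intensities = [I0 / (1 + K_SV * c) for c in concentrations]

# Fit the slope of the Stern-Volmer plot with the intercept fixed at 1,
# since the model requires I0/I -> 1 at zero quencher concentration.
ratios = [I0 / i for i in intensities]
slope = (sum(c * (r - 1) for c, r in zip(concentrations, ratios))
         / sum(c * c for c in concentrations))
print(f"recovered K_SV = {slope:.1f} L/mol")
```

A downward-curving plot, as observed for the other polymers, means the single-constant linear model no longer describes the data, consistent with some fluorophores being shielded from the quencher.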
  • Halme, Topi (2021)
    In a quickest detection problem, the objective is to detect abrupt changes in a stochastic sequence as quickly as possible, while limiting the rate of false alarms. The development of algorithms that, after each observation, decide either to stop and declare that a change has happened or to continue the monitoring process has been an active line of research in mathematical statistics. The algorithms seek to optimally balance the inherent trade-off between the average detection delay in declaring a change and the likelihood of declaring a change prematurely. Change-point detection methods have applications in numerous domains, including monitoring the environment or the radio spectrum, target detection, financial markets, and others. Classical quickest detection theory focuses on settings where only a single data stream is observed. In modern-day applications facilitated by the development of sensing technology, one may be tasked with monitoring multiple streams of data for changes simultaneously. Wireless sensor networks and mobile phones are examples of technologies where devices can sense their local environment and transmit data in a sequential manner to some common fusion center (FC) or cloud for inference. When performing quickest detection tasks on multiple data streams in parallel, the classical tools of quickest detection theory focusing on false alarm probability control may become insufficient. Instead, controlling the false discovery rate (FDR) has recently been proposed as a more useful and scalable error criterion. The FDR is the expected proportion of false discoveries (false alarms) among all discoveries. In this thesis, novel methods and theory related to quickest detection in multiple parallel data streams are presented. The methods aim to minimize detection delay while controlling the FDR. In addition, scenarios are considered where not all of the devices communicating with the FC can remain operational and transmitting to the FC at all times.
The FC must choose which subset of data streams it wants to receive observations from at a given time instant. Intelligently choosing which devices to turn on and off may extend the devices’ battery life, which can be important in real-life applications, while affecting the detection performance only slightly. The performance of the proposed methods is demonstrated in numerical simulations to be superior to existing approaches. Additionally, the topic of multiple hypothesis testing in spatial domains is briefly addressed. In a multiple hypothesis testing problem, one tests multiple null hypotheses at once while trying to control a suitable error criterion, such as the FDR. In a spatial multiple hypothesis problem, each tested hypothesis corresponds to, e.g., a geographical location, and the non-null hypotheses may appear in spatially localized clusters. It is demonstrated that implementing a Bayesian approach that accounts for the spatial dependency between the hypotheses can greatly improve testing accuracy.
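The FDR criterion discussed above is classically controlled, in the static multiple-testing setting, by the Benjamini-Hochberg step-up procedure; the thesis's methods are sequential, but the static version conveys the idea of rejecting the largest set of hypotheses whose ordered p-values stay under a growing threshold:

```python
# Benjamini-Hochberg step-up procedure: controls the FDR at level q
# for independent p-values (a static analogue of the sequential setting).
def benjamini_hochberg(p_values, q=0.05):
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k = 0  # largest rank (1-based) with p_(k) <= k * q / m
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank * q / m:
            k = rank
    return sorted(order[:k])  # indices of rejected null hypotheses

pvals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
print(benjamini_hochberg(pvals, q=0.05))  # rejects the two smallest
```

The step-up structure matters: a p-value may be rejected even if it exceeds its own threshold, as long as some larger-ranked p-value passes its threshold.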
  • Nikula, Petter (2016)
    This thesis investigates the automated near real time science analysis performed at the INTEGRAL Science Data Centre. The structure of the Quick-Look Analysis pipeline and the individual analysis stages are detailed. The stage performing pattern recognition for two-dimensional coordinate lists, i.e. source identification, is tested in depth. The lists contain sources located in a randomly selected 9° by 9° area of the sky. Using the current live version and default parameters, a simulated new source was correctly identified 98% of the time, while fields with no new sources produced false detections 8% of the time. The testing reveals two separate flaws: a code error and a methodological error. The code error reduces the sensitivity of recognizing that a new source has been detected. The methodological error causes the algorithm to report the detection of previously unknown sources where none exist. A possible solution is presented: with it, new source detection improved to well above 99% and false detections were reduced to below 2%. A second methodological error causes the algorithm used to correct for the pointing error of the instrument to produce unreliable results. Fortuitously, this problem is serious only for small pointing errors, where the source matching algorithm is able to compensate for it.
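The source-identification stage described above amounts to cross-matching an observed coordinate list against a catalogue: an observed source with no catalogue counterpart within some matching radius is flagged as new. A simplified sketch of that idea (not the Quick-Look Analysis code; function name, coordinates, and matching radius are invented for illustration):

```python
import math

def identify_new_sources(observed, catalogue, radius):
    """Flag observed sources with no catalogue counterpart within `radius`."""
    new = []
    for x, y in observed:
        # Distance to the nearest catalogued source (brute force).
        nearest = min(math.hypot(x - cx, y - cy) for cx, cy in catalogue)
        if nearest > radius:
            new.append((x, y))
    return new

catalogue = [(1.0, 1.0), (4.0, 2.5), (7.2, 8.1)]
observed = [(1.02, 0.98), (4.01, 2.52), (5.5, 5.5)]
print(identify_new_sources(observed, catalogue, radius=0.1))  # [(5.5, 5.5)]
```

The matching radius encodes the trade-off the thesis measures: too small and genuine counterparts are misreported as new sources; too large and real new sources are absorbed into existing catalogue entries.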