
Browsing by Title


  • Lehtinen, Sami (2016)
    This work concerns the qualitative theory of autonomous ordinary differential equation (ODE) systems. The purpose of the work is threefold. First, it is intended to familiarize the reader with the essential theory of autonomous systems in dimension n. Second, it is hoped that the reader will learn the importance of planar autonomous systems, such as the beautiful result of the Poincaré-Bendixson theorem. Third, since the theory is utilised in applied science, considerable space has been devoted to analytical methods that are widely used in applications. The fundamental theory of existence and uniqueness of solutions to ODE systems is presented in Chapter 2. Chapter 3 then treats the essential theory of autonomous systems in dimension n, such as the orbits and limit sets of solutions. In Chapter 4 we consider planar autonomous systems. What makes planar systems different from higher-dimensional ones is the Jordan Curve theorem, which has allowed the theory to go much further; in particular, it underlies the Poincaré-Bendixson theorem, a statement about the long-term behaviour of solutions to an autonomous system in the plane. Note that the Jordan Curve theorem is stated without proof, since the proof is notoriously difficult although the result appears obvious. Lastly, in order not to lose sight of the applied side of the subject, Chapters 5 and 6 are devoted to analytical methods for autonomous systems. Chapter 5 treats local stability analysis of an equilibrium. Then, in Chapter 6 we work through a relatively large case study of an abnormal competing-species model based on the science fiction movie The Terminator (1984), which should be taken with a pinch of salt. In its dystopian world, the two powerful forces of Men and the Terminator cyborgs try to rid themselves completely of one another. Lack of space has, however, forced us to simplify some of the individual behaviour.
These simplifications are partly justified by the fact that the purpose is to show how the theory can be applied even in a (hopefully) fictional situation and, of course, to answer the puzzling question of whether the human race would stand a chance against the Terminators.
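The long-term behaviour such analytical methods predict can also be explored numerically. The sketch below integrates a generic planar Lotka-Volterra competition system with the forward Euler method; the equations and every parameter value are illustrative stand-ins, not the thesis's actual Terminator model.

```python
# Forward-Euler integration of a planar autonomous competition system
#   x' = x * (a - b*x - c*y)
#   y' = y * (d - e*y - f*x)
# With these illustrative parameters the system has no interior
# equilibrium, so one species excludes the other in the long run.

def simulate(x0, y0, a=1.0, b=0.5, c=0.3, d=0.8, e=0.4, f=0.6,
             dt=0.01, steps=5000):
    """Return the state (x, y) after `steps` Euler steps of size dt."""
    x, y = x0, y0
    for _ in range(steps):
        dx = x * (a - b * x - c * y)
        dy = y * (d - e * y - f * x)
        x, y = x + dt * dx, y + dt * dy
    return x, y
```

With the defaults, trajectories starting from positive initial data approach the boundary equilibrium (2, 0): the first population settles at its carrying capacity while the second dies out, the competitive-exclusion outcome a phase-plane analysis would predict.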
  • Store, Joakim (2020)
    In software configuration management, branching is a common practice that can enable efficient parallel development between developers and teams. However, developers might not be aware of the different branching practice options or of how exactly to formulate a branching strategy. This can have the opposite effect on productivity and cause other issues as well. The focus of this thesis is on which branching practices are considered beneficial, what affects their usability, what risks are involved, and how to plan these practices in a structured manner. Plenty of branching practices are presented in the literature, and they can either complement each other or be completely incompatible. Much of a practice's benefit depends on the surrounding context, such as the tools in use and the project's characteristics. The most relevant risk in branching is merge conflicts, but there are others as well. The approaches to planning a branching strategy, however, are found to be too narrow in the reviewed literature. Thus, the Branching Strategy Formulation and Analysis Method (BSFAM) is proposed to help teams and organizations plan their branching strategy in a structured manner. Additionally, the issues of branching are explored in the context of an organization that has multiple concurrent projects ongoing for a single product. Information on this is gathered through a survey, semi-structured interviews, and available documentation. The issues that were found can be attributed to the lack of a proper base strategy, difficulties in coordination and awareness, and test automation management in relation to branching. The proposed method is then applied in that same context in order to provide solutions to the organization's issues and to provide an example case. BSFAM will be taken into use in upcoming projects in the organization, and it will be improved if necessary.
If the proposed method is adopted more widely and its resulting information published, it could support further research on how different branching practices fit different contexts. Additionally, it could help new, generally better branching practices emerge.
  • Rantanen, Milla-Maarit (2020)
    Semiconductor radiation detectors are devices used to detect electromagnetic and particle radiation. The signal formation is based on the transport of charges between the valence band and the conduction band. The interaction between the detector material and the radiation generates free electrons and holes that move in opposite directions in the electric field applied between the electrodes. The movement of charges induces a current in the external electrical circuit, which can be used for particle identification, measurement of energy or momentum, timing, or tracking. There are several different detector materials and designs, and new options are continuously being developed. Diamond is a detector material that has received a great amount of interest in many fields owing to its many unique properties. Many of them arise from the diamond crystal structure and the strength of the bond between the carbon atoms. The tight and rigid structure makes diamond a strong and durable material, which allows operation of diamond detectors in harsh radiation environments. This, combined with fast signal formation and a short response time, makes the diamond detector an excellent choice for high-energy physics applications. The diamond structure also leads to a wide band gap. Thanks to the wide band gap, diamond detectors have low leakage current and can be operated even at high temperatures without protection from surrounding light. The electrical properties of semiconductors depend strongly on the concentration of impurities and crystal defects. Determination of the electrical properties can therefore be used to study the crystal quality of the material. The electrical properties of the material determine the safe operational region of the device, and knowledge of the leakage current and the charge carrier transport mechanism is required for optimized operation of detectors.
Characterization of electrical properties is therefore an important part of semiconductor device fabrication. Electrical characterization should be done at different stages of fabrication in order to detect problems at an early stage and to get an idea of what could have caused them. This work describes the quality assurance process of single-crystal CVD (chemical vapour deposition) diamond detectors for the PPS detectors of the CMS experiment. The quality assurance process includes visual inspection of the diamond surfaces and dimensions by optical and cross-polarized light microscopy, and electrical characterization by measurement of leakage current and CCE (charge collection efficiency). The CCE measurement setup was improved with a stage controller, which allows automatic measurement of CCE at several positions on the diamond detector. The operation of the new setup and the reproducibility of the results were studied by repeated measurements of a reference diamond. The setup could successfully be used to measure CCE over the whole diamond surface. However, the measurement uncertainty is quite large. Further work is needed to reduce the measurement uncertainty and to determine the correlation between observed defects and the measured electrical properties.
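The CCE itself is a simple ratio, and reproducibility over repeated measurements of a reference reduces to summary statistics. A minimal sketch with hypothetical names; the charges must share a unit (e.g. fC):

```python
# Charge collection efficiency for one measurement position: the ratio
# of the charge actually collected to the charge expected for full
# collection. Function and variable names are illustrative.

def cce(collected_charge, expected_charge):
    return collected_charge / expected_charge

def reproducibility(cce_values):
    """Mean and sample standard deviation over repeated measurements
    of the same reference position."""
    n = len(cce_values)
    mean = sum(cce_values) / n
    var = sum((v - mean) ** 2 for v in cce_values) / (n - 1)
    return mean, var ** 0.5
```

A large spread returned by `reproducibility` is exactly the kind of measurement uncertainty the abstract refers to.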
  • Aaltonen, Serja (University of Helsinki, 2007)
    ALICE (A Large Ion Collider Experiment) is an experiment at CERN (the European Organization for Nuclear Research) whose heavy-ion detector is dedicated to exploiting the unique physics potential of nucleus-nucleus interactions at LHC (Large Hadron Collider) energies. As part of that project, 716 so-called type V4 modules were assembled in the Detector Laboratory of the Helsinki Institute of Physics during the years 2004 - 2006. With altogether over a million detector strips, this has been the most massive particle detector project in the science history of Finland. One ALICE SSD module consists of a double-sided silicon sensor and two hybrids containing 12 HAL25 front-end readout chips and some passive components, such as resistors and capacitors. The components are connected together by TAB (Tape Automated Bonding) microcables. The components of the modules were tested in every assembly phase with comparable electrical tests to ensure the reliable functioning of the detectors and to pinpoint possible problems. The components were accepted or rejected according to limits confirmed by the ALICE collaboration. This study concentrates on the test results of framed chips, hybrids, and modules. The total yield of the framed chips is 90.8%, of hybrids 96.1%, and of modules 86.2%. The individual test results have been investigated in the light of the known error sources that appeared during the project. After the problems of the project's learning curve were solved, material problems, such as defective chip cables and sensors, seemed to cause most of the assembly rejections. These problems were typically seen in tests as too many individual channel failures. In contrast, bonding failures rarely caused the rejection of any component. One sensor type among the three sensor manufacturers has proven to have lower quality than the others.
The sensors of this manufacturer are very noisy, and their depletion voltages are usually outside the specification given to the manufacturers. Reaching a 95% assembly yield during module production demonstrates that the assembly process has been highly successful.
  • Abbas, Hassan (2018)
    Mobile users surpassing desktop users has tempted mobile network operators to deploy traffic-shaping policies in order to utilize their resources efficiently. These policies have significantly lowered the quality of service of applications. Present systems can accurately detect traffic discrimination between application protocols, for instance BitTorrent relative to the HTTP protocol, and extract the quality of service statistically by comparing the data. This thesis proposes a method that tries to understand system performance and application behavior, along with the network's performance, in requesting and delivering the desired quality of service. We devised a framework that tests an MNO (Mobile Network Operator) and its policies on the 4G network with regard to the Type of Service flags in the IP header. We investigate whether the network path allows applications like Skype, WhatsApp, Facebook Messenger and Viber to set the Type of Service (DSCP class) in their IP headers. We implemented the framework as an Android application which sets the DSCP class in the IP header of each respective application's data. Our results show that major mobile network operators in Finland do not allow applications to set DSCP classes in their IP headers to obtain better quality of service.
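At the socket level, marking traffic in this way comes down to setting the TOS byte, which carries the 6-bit DSCP value in its upper bits. The thesis's framework is an Android application, so the Python sketch below is only an illustration of the underlying socket option, not its implementation.

```python
# Setting the DSCP class on a UDP socket via the IP_TOS option.
# The TOS byte holds DSCP in bits 7..2, so the value written is
# dscp << 2. EF (Expedited Forwarding), used for voice, is DSCP 46.
import socket

def make_marked_socket(dscp=46):
    """Return a UDP socket whose outgoing packets carry `dscp`."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return s
```

Whether the mark survives to the far end is exactly what the thesis measures: operators may zero or rewrite the field in transit.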
  • Nousiainen, Katri (2018)
    The human brain is divided into the left and right hemispheres, and there are functional differences between them. A hemispheric difference is called lateralization of brain function, and the degree of lateralization is described by the laterality index. The most investigated domain of lateralized brain function is language, which is a left-hemisphere-dominant function in the majority of the population. Functional magnetic resonance imaging provides a noninvasive method for studying brain function indirectly through the blood-oxygenation-level-dependent effect. Language-related functional magnetic resonance imaging can be used to localize Broca's speech area and to determine the dominant hemisphere in epileptic patients. The purpose of this thesis is to assess a method for calculating the laterality index from functional magnetic resonance imaging data. The data were acquired during three language task paradigms with five subjects and analyzed statistically. The methods used for laterality index calculation are reviewed, and a new calculation method is presented. Result tables of laterality indices and hemispheric dominances per region of interest are generated. The presented laterality index calculation method successfully determined the speech laterality of three subjects out of five as left-hemispheric dominance. The language laterality of the two remaining subjects could not be determined due to corrupted functional data and contradictory results between paradigms. The major source of error is the subject's head motion during functional imaging. Together with information about the extent of head motion, the generated table could provide relevant extra information for epileptic patients' functional magnetic resonance imaging data and could serve clinical purposes in the future.
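The classic laterality index that the reviewed calculation methods build on is straightforward once left- and right-hemisphere activation measures are available. A minimal sketch; the 0.2 dominance threshold is one common convention from the literature, not necessarily the thesis's choice:

```python
def laterality_index(left, right):
    """Classic LI = (L - R) / (L + R), in [-1, 1].
    left, right: activation measures (e.g. suprathreshold voxel
    counts) within the left/right regions of interest."""
    return (left - right) / (left + right)

def dominance(li, threshold=0.2):
    """Common convention: |LI| above the threshold decides the
    dominant hemisphere, otherwise the function is bilateral."""
    if li > threshold:
        return "left"
    if li < -threshold:
        return "right"
    return "bilateral"
```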
  • Sirviö, Robert (2016)
    Measuring risk is mandatory in every form of responsible asset management; be it mitigating losses or maximizing performance, the level of risk dictates the magnitude of the effect of the strategy the asset manager has chosen to execute. Many common risk measures rely on simple statistics computed from historic data. In this thesis, we present a more dynamic risk measure explicitly aimed at the commodity futures market. The basis of our risk measure is a stochastic model of the commodity spot price, namely the Schwartz two-factor model. The model is determined by a system of stochastic differential equations in which the spot price and the convenience yield of the commodity are modelled separately. The spot price is modelled as a geometric Brownian motion with a correction factor (the convenience yield) applied to the drift of the process, whereas the convenience yield is modelled as an Ornstein-Uhlenbeck process. Within this framework, we show that the price of a commodity futures contract has a closed-form solution. The pricing of futures contracts works as a coupling between the unobservable spot price and the observable futures contract price, rendering model fitting and filtering techniques applicable to our theoretical model. The fitting of the system parameters of our model is done using the prediction error decomposition algorithm. The core of the algorithm is a by-product of a filtering algorithm called the Kalman filter, which enables extraction of the likelihood of a given parameter set. By subjecting the likelihood extraction process to numerical optimization, the optimal parameter set is acquired, provided that the process converges. Once we have attained the optimal parameter sets for all of the commodity futures included in the portfolio, we are ready to perform the risk measurement procedure.
The first phase of the process is to generate multiple future trajectories of the commodity spot prices and convenience yields. The trajectories are then fed to the trading algorithm, generating a distribution of returns for every commodity. Finally, the distributions are aggregated, resulting in a portfolio-level returns distribution for a given target time frame. We show that the properties of this distribution can be used as an indicator of possible anomalies in the returns within the given time frame.
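The trajectory-generation phase can be sketched with a plain Euler-Maruyama discretisation of the two-factor dynamics. All parameter values below are illustrative placeholders, not fitted values from any data:

```python
# Euler-Maruyama simulation of the Schwartz two-factor model:
#   dS     = (mu - delta) * S dt + sigma_s * S dW1   (spot price, GBM-like)
#   ddelta = kappa * (alpha - delta) dt + sigma_d dW2 (convenience yield, OU)
# with corr(dW1, dW2) = rho, realised via a Cholesky-style mix of normals.
import math
import random

def simulate_path(s0=50.0, d0=0.05, mu=0.1, kappa=1.5, alpha=0.05,
                  sigma_s=0.3, sigma_d=0.2, rho=0.4,
                  dt=1 / 252, steps=252, seed=0):
    """Return a list of (spot, convenience_yield) states, one per step."""
    rng = random.Random(seed)
    s, d = s0, d0
    path = [(s, d)]
    for _ in range(steps):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * rng.gauss(0.0, 1.0)
        s += (mu - d) * s * dt + sigma_s * s * math.sqrt(dt) * z1
        d += kappa * (alpha - d) * dt + sigma_d * math.sqrt(dt) * z2
        path.append((s, d))
    return path
```

Repeating this with different seeds yields the ensemble of trajectories that the trading algorithm is then applied to.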
  • Turunen, Tarja (2023)
    Norway spruce (Picea abies (L.) Karst.) is one of the most economically important tree species in Finland. It is known to be a drought-sensitive species and is expected to suffer from the warming climate. In addition, warmer temperatures benefit the pest insect Eurasian spruce bark beetle (Ips typographus L.) and the pathogen Heterobasidion parviporum, which both use Norway spruce as their host and can make the future of Norway spruce in Finland even more difficult. In this thesis, adult Norway spruce mortality was studied from false-colour aerial photographs taken between 2010 and 2021. Dead trees were detected from the photos by visual inspection, and mortality was calculated from the difference in the number of dead trees between photos from different years. The aim was to find out whether Norway spruce mortality in Finland had increased over time, and which factors had been driving tree mortality. The results indicate that tree mortality was highest in the last third of the studied 10-year period, so it was concluded that tree mortality had increased over time. Various possible tree mortality drivers were analysed and found to be connected to tree mortality. Each driver was analysed individually by testing its correlation with tree mortality. In addition, linear regression analysis and segmented linear regression with one breakpoint were used with the continuous variables. Increased tree mortality correlated with higher stand mean age, mean height, mean diameter, and mean volume, supporting the findings of earlier research. Mortality was connected to the proportions of different tree species in the stand: the higher the proportion of spruce, the higher the mortality, and the higher the proportion of deciduous trees, the lower the mortality. Of the fertility classes, tree mortality was highest in the second most fertile class, herb-rich heath forest, and decreased with decreasing fertility.
Dead trees were also found to be located closer to stand edges than to the stand centroid. Increased temperature resulted in increased mortality. Increased vapour pressure deficit (VPD) and drought, analysed with the Standardized Precipitation Evapotranspiration Index (SPEI) at different time scales, were also connected with increased tree mortality. Further research is required to understand and quantify the joint effect of all the interacting mortality drivers. Nevertheless, it seems that for Norway spruce the warmer future with increased mortality is already here, and this should be taken into consideration in forest management. Favouring mixed stands could be one way to help Norway spruce survive in the warming climate.
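Segmented linear regression with one breakpoint can be sketched as a brute-force search: fit ordinary least squares separately on each side of every candidate breakpoint and keep the split with the smallest total squared error. A generic sketch (data assumed sorted by x), not the thesis's exact procedure:

```python
# One-breakpoint segmented regression by exhaustive breakpoint search.

def ols(xs, ys):
    """Slope, intercept and sum of squared errors of an OLS line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

def segmented_fit(xs, ys, min_pts=3):
    """Try every split leaving at least min_pts points per segment;
    return (breakpoint_x, total_sse) of the best split."""
    best = None
    for i in range(min_pts, len(xs) - min_pts + 1):
        _, _, sse_left = ols(xs[:i], ys[:i])
        _, _, sse_right = ols(xs[i:], ys[i:])
        total = sse_left + sse_right
        if best is None or total < best[1]:
            best = (xs[i], total)
    return best
```

On mortality-vs-driver data, the recovered breakpoint marks where the response changes regime, e.g. an age or drought-index threshold beyond which mortality climbs faster.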
  • Helgadóttir, Steinunn (2023)
    Clean snow has the highest albedo of any natural surface, making snow-covered glaciers an important component of the Earth's energy balance. However, the presence of light-absorbing impurities such as mineral dust on glacier surfaces alters their reflective properties, leading to a reduction in albedo and consequently increased absorption of incoming solar radiation, which further impacts the glacier surface mass balance (SMB). Icelandic glaciers exhibit high annual and inter-annual variability in SMB due to climate variability, but deposition of mineral dust originating from glaciofluvial dust hotspots can have a large impact on summer ablation. The frequency of dust storms and deposition is controlled by high-velocity winds and prolonged dry periods. Additionally, Icelandic mineral dust contains a high amount of iron and iron oxides, which makes it extremely light-absorbing. An extensive dust event occurred over the southwest outlets of the Vatnajökull ice cap during early July 2022, causing surface darkening. To investigate the impact of the dust event on the melt-season SMB, this study used automatic weather station (AWS) data from three sites on Tungnaárjökull glacier, a SW outlet of the Vatnajökull ice cap. Daily melt was estimated with a simple snow-melt model and AWS data. To quantify the overall impact of dust on the melt rates, albedo from the 2015 melt season at the three AWS sites was used to simulate the surface albedo of a dust-free surface during the 2022 melt season. The dust event caused a melt enhancement of almost 1.5 m water equivalent (m w.e.) above 1000 m elevation. As Icelandic glaciers exhibit large spatial variations over the melt season, the SMB sensitivity to dust deposition varied with elevation, being strongest at the uppermost site.
Additionally, the sensitivity to the timing of the dust event was investigated, demonstrating that an earlier occurrence increases the melt while a later occurrence reduces it, compared to the July event. The results of this study reveal the impact of positive radiative forcing on the SMB of Tungnaárjökull.
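A radiation-only melt sketch shows why a dust-driven albedo drop enhances melt: the absorbed shortwave flux scales with (1 − albedo). This deliberately omits the turbulent and longwave terms that a full surface-energy-balance model, including the one used in the thesis, would carry, so it is an illustration of the mechanism only.

```python
# Daily melt in metres water equivalent (m w.e.) from absorbed
# shortwave radiation alone, assuming the surface is at melting point.
LATENT_HEAT_FUSION = 334_000.0  # J kg^-1
WATER_DENSITY = 1000.0          # kg m^-3
SECONDS_PER_DAY = 86_400.0

def daily_melt_mwe(sw_in, albedo):
    """sw_in: daily-mean incoming shortwave flux (W m^-2)."""
    absorbed = sw_in * (1.0 - albedo)         # W m^-2
    energy = absorbed * SECONDS_PER_DAY       # J m^-2 per day
    return energy / (LATENT_HEAT_FUSION * WATER_DENSITY)
```

With an illustrative 250 W m^-2 of incoming shortwave, dropping the albedo from a clean-snow 0.8 to a dust-darkened 0.4 triples the radiatively driven melt, which is the effect the dust-free 2015 albedo simulation isolates.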
  • Vázquez Mireles, Sigifredo (2021)
    Piperine is the major plant alkaloid encountered in various Piperaceae species and has received considerable attention in recent years because of its broad range of favourable biological and pharmacological activities, including antioxidant, immunostimulant, bioavailability-enhancing and anti-carcinogenic properties. The literature part of this thesis gives a selective overview of advanced methods for the quantitative analysis of piperine in plant-based materials and of the various approaches employed for instrumental analysis, including spectroscopic, chromatographic, and electrochemical techniques. An effort was made to evaluate the potential of the reported methods based on analytical figures of merit, such as total sample throughput capacity, analytical range, precision, accuracy, limit of detection and limit of quantification. The experimental part of the thesis focused on the development of a convenient, robust, simple, efficient and reliable method to quantify piperine in pepper fruits. The analytical method established in this thesis involves liberation of piperine by continuous liquid extraction of ground pepper fruits with methanol, and cleanup of the crude extracts with reversed-phase solid-phase extraction. Analyte quantitation was accomplished using gradient reversed-phase high-performance liquid chromatography with mass spectrometric detection, using electrospray ionization ion trap mass spectrometry. To enable reliable internal standardization, a deuterium-labelled piperine surrogate (piperine-D10) was synthesized from piperine in three steps in a reasonable overall yield (65%) and standard-level purity (99.7%). It may be worth mentioning that the commercial market value of the amount of piperine-D10 synthesized in-house exceeds 167,400 euros.
One of the major challenges encountered during the development and optimization of the analytical method was the extreme photosensitivity of piperine and piperine-D10, both suffering extensive photoisomerization in solution upon exposure to ambient light within a matter of minutes. This issue was addressed by carrying out all tasks associated with synthesis, sample preparation and analytical measurements under dark conditions. For the preparation of calibrators, a fully automated procedure was developed, controlled by custom-written injector programs and executed in the light-protected sample compartment of a conventional autosampler module. In terms of merits, the developed analytical method offers good sample throughput capacity (run time 20 min, retention time 8.2 min), excellent selectivity and high sensitivity (limit of detection 0.012 ppm, limit of quantification 0.2 ppm). The method is applicable over a linear range of 0.4 to 20 ng of injected mass (r2 = 0.999). The stability of standards and fully processed samples was found to be excellent, with less than 5% variation in concentration after 3 weeks (calibrators) or 4 months (samples) of storage at 4 °C and 23 °C, respectively, under dark conditions. Intra-day repeatability was better than 2.95%. Preliminary validation data also suggest satisfactory inter-operator reproducibility. To test the applicability of the developed LC-MS method, it was employed to quantify piperine in a set of 15 pepper fruit samples, including black, white, red and green varieties of round and long peppers, purchased from local markets and retailers. The piperine contents obtained were in the range of 17.28 to 56.25 mg/g (piperine/minced sample) and generally in good agreement with the values reported in the scientific literature.
It is justified to assume that the developed analytical method may be directly applicable to the quantitation of related pepper alkaloids in herbal commodities and, after some modifications to the sample preparation strategy, also to the monitoring of piperine in biological fluids, such as serum and urine.
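Quantitation against calibrators of known injected mass reduces to fitting and then inverting a least-squares calibration line over the method's linear range. A generic sketch with illustrative numbers, not the thesis's calibration data:

```python
# Least-squares calibration line: detector response (peak area)
# versus injected analyte mass, then inversion for an unknown.

def fit_calibration(masses, areas):
    """Return (slope, intercept) of the OLS line area = slope*mass + b."""
    n = len(masses)
    mx, my = sum(masses) / n, sum(areas) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(masses, areas))
             / sum((x - mx) ** 2 for x in masses))
    intercept = my - slope * mx
    return slope, intercept

def quantify(area, slope, intercept):
    """Invert the calibration line to get the injected mass."""
    return (area - intercept) / slope
```

In practice the internal standard (here, piperine-D10) corrects each area for losses and ionization drift before this line is applied.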
  • Kallonen, Kimmo (2019)
    Quarks and gluons are elementary particles called partons, which produce collimated sprays of particles when protons are collided head-on at the Large Hadron Collider. These observable signatures of the quarks and gluons are called jets and are recorded by huge particle detectors, such as the Compact Muon Solenoid. The reconstruction of the jets from detector signals attempts to trace the particle-level information all the way back to the level of the initial collision event with the initiating partons. Jets originating from gluons and the three lightest quarks are very similar to each other, exhibiting only subtle differences caused by the fact that gluons radiate more intensely. Quark/gluon jet discrimination algorithms are dedicated to identifying these two types of jets. Traditionally, likelihood-based quark/gluon discriminators have been used. While machine learning is nothing new to the high energy physics community, the advent of deep neural networks caused an upheaval, and they are now being implemented to take on various tasks across the research field, including quark/gluon discrimination. In this thesis, three different deep neural network models are presented and their comparative performance in quark/gluon discrimination is evaluated in seven different bins of varying jet transverse momentum and pseudorapidity. The performance of a likelihood-based discriminator is used as a benchmark. Deep neural networks prove to provide excellent performance in quark/gluon discrimination, with a jet image-based visual recognition model being the most robust and offering the largest performance improvement over the benchmark discriminator.
  • Sirén, Saija (2015)
    Lipids are found in all living organisms, and complex lipids are typically determined from biological samples and food products. Samples are usually prepared prior to analysis. Liquid-liquid extraction (LLE) is the most commonly used technique for isolating lipids. Two fundamental protocols are the Folch and Bligh & Dyer methods; in both, the extraction is based on lipid partitioning between chloroform and water-methanol phases. Methyl tert-butyl ether offers an environmentally friendly alternative to chloroform. The total lipid fraction can be further separated by solid-phase extraction, and complex lipids are typically isolated from other lipid species with silica SPE cartridges. The three main techniques used in the quantitative determination of complex lipids are thin-layer chromatography (TLC), high-performance liquid chromatography (HPLC) and direct-infusion mass spectrometry (MS). Thin-layer chromatography is a traditional technique, but its applicability is limited by poor resolution and the requirement of post-column derivatization. HPLC, by contrast, provides efficient separation and is easily coupled with several detectors; HPLC methods are the most commonly used in lipid analysis. Direct-infusion mass spectrometry is an emerging technique in which lipid molecules can be precisely identified during a fast measurement; other advantages are excellent selectivity and sensitivity. A new method for glycolipids was developed during the experimental period. Glycolipids were isolated from bio-oil samples using solid-phase extraction cartridges. Normal-phase liquid chromatography was utilized to separate the glycolipids, and detection was carried out with parallel tandem mass spectrometry (MS/MS) and evaporative light-scattering detection (ELSD). Quantification was based on the ELSD measurements, whereas MS/MS was used to confirm the identification.
The developed method was validated and the following parameters were determined: linearity, trueness, precision, measurement uncertainty, and the detection and quantification limits. Precision was satisfactory, mainly between 5 and 15%. The trueness results were, however, less satisfactory, because the measured concentrations were typically higher than the theoretical concentrations; the results depended on the analyte but generally varied between 66% and as much as 194%. The validation showed that the method needs further development. Mass spectrometric quantification could be considered if appropriate internal standards were available.
  • Enckell, Anastasia (2023)
    Numerical techniques have become powerful tools for studying quantum systems. Eventually, quantum computers may enable novel ways to perform numerical simulations and conquer problems that arise in classical simulations of highly entangled matter. Simple one-dimensional systems of low entanglement can be simulated efficiently on a classical computer using tensor networks. Such toy simulations also give us the opportunity to study the methods of quantum simulation, such as the different transformation techniques and optimization algorithms that could benefit near-term quantum technologies. In this thesis, we study a theoretical framework for fermionic quantum simulation and simulate the real-time evolution of particles governed by the Gross-Neveu model in one dimension. To simulate the Gross-Neveu model classically, we use the Matrix Product State (MPS) method. Starting from the continuum case, we discretise the model by putting it on a lattice and encode the time evolution operator with the help of the fermion-to-qubit transformations Jordan-Wigner and Bravyi-Kitaev. The simulation results are visualised as plots of probability density and display the expected flavour and spatial symmetry of the system. The comparison of the two transformations shows better performance for the Jordan-Wigner transformation both before and after gate reduction.
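The Jordan-Wigner transformation maps the annihilation operator of fermionic mode j to a Pauli string: a parity chain of Z operators on the preceding qubits followed by (X + iY)/2 on qubit j. A small sketch under that common convention (sign and qubit-ordering conventions vary between references):

```python
# Jordan-Wigner encoding of a fermionic annihilation operator
#   a_j -> (prod_{k<j} Z_k) * (X_j + i Y_j) / 2
# represented as a list of (coefficient, Pauli string) terms.

def jordan_wigner_annihilation(j, n_modes):
    """Return a_j on n_modes qubits as [(coeff, pauli_string), ...].
    Each string has one letter per qubit, 'I' meaning identity."""
    z_chain = "Z" * j                    # parity chain on qubits < j
    tail = "I" * (n_modes - j - 1)       # identities on qubits > j
    return [(0.5, z_chain + "X" + tail),
            (0.5j, z_chain + "Y" + tail)]
```

The Z chain is what makes Jordan-Wigner strings grow with mode index; the Bravyi-Kitaev transformation trades this linear growth for logarithmic-weight strings at the cost of a more involved encoding.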
  • Haataja, Hanna (2016)
    In this thesis we introduce the Coleman-Weinberg mechanism through sample calculations. We calculate the effective potential in massless scalar theory and in massless quantum electrodynamics. After the sample calculations, we walk through a simple model in which the scalar particle that breaks scale invariance resides in a hidden sector. Before going into the calculations, we introduce basic concepts of quantum field theory; in that context we discuss the interactions of fields and the Feynman rules for Feynman diagrams. Afterwards we introduce thermal field theory and calculate the effective potential in two cases: massive scalar theory and the Standard Model without fermions. We present the procedure for calculating the effective potential including ring diagram contributions. The motivation is that spontaneously broken symmetries are sometimes restored in the high-temperature regime, and if the phase transition between the broken-symmetry and full-symmetry phases is a first-order phase transition, baryogenesis can occur. Using the methods introduced in this thesis, Standard Model extensions that contain hidden sectors can be analyzed.
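For reference, the classic one-loop result of the massless scalar sample calculation, in the renormalization convention of Coleman and Weinberg (the constant inside the parentheses depends on the chosen scheme; M is the renormalization scale at which the coupling λ is defined):

```latex
V_{\mathrm{eff}}(\phi_c) \;=\; \frac{\lambda}{4!}\,\phi_c^{4}
  \;+\; \frac{\lambda^{2}\phi_c^{4}}{256\pi^{2}}
  \left(\ln\frac{\phi_c^{2}}{M^{2}} \;-\; \frac{25}{6}\right)
```

The logarithm is what generates a nontrivial minimum away from the origin, so radiative corrections alone break the classical scale invariance.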
  • Hernandez Serrano, Ainhoa (2023)
    Using quantum algorithms to carry out machine learning tasks is what is known as Quantum Machine Learning (QML), and the methods developed within this field have the potential to outperform their classical counterparts in solving certain learning problems. The development of the field depends in part on that of a functional quantum random access memory (QRAM), called for by some of the algorithms devised. Such a device would store data in superposition and could then be queried when algorithms require it, similarly to its classical counterpart, allowing efficient data access. Taking an axiomatic approach to QRAM, this thesis provides the main considerations, assumptions and results regarding QRAM, yielding a QRAM handbook and a comprehensive introduction to the literature pertaining to it.
  • Lintulampi, Anssi (2023)
    Secure data transmissions are a crucial part of modern cloud services and data infrastructures. Securing a communication channel for data transmission is possible if the communicating parties can securely exchange a secret key. The secret key is used in a symmetric encryption algorithm to encrypt digital data that is transmitted over an unprotected channel. Quantum key distribution is a method that communicating parties can use to securely share a secret cryptographic key with each other. The security of quantum key distribution requires that the communicating parties are able to ensure the authenticity and integrity of the messages they exchange on the classical channel during the protocol. For this purpose they use cryptographic authentication techniques such as digital signatures or message authentication codes. The development of quantum computers affects how traditional authentication solutions can be used in the future. For example, traditional digital signature algorithms will become vulnerable if a quantum computer is used to solve the underlying mathematical problems. Authentication solutions used in quantum key distribution should therefore be safe even against adversaries with a quantum computer. This master's thesis studies quantum-safe authentication methods that could be used with quantum key distribution. Two different quantum-safe authentication methods were implemented for the quantum key distribution protocol BB84. The implemented methods are compared based on their speed and the size of the authenticated messages. Security aspects related to the authentication are also evaluated. The results show that both authentication methods are suitable for use in quantum key distribution, and that the implemented method using message authentication codes is faster than the method using digital signatures.
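The MAC-based approach can be sketched with Python's standard library (an illustrative example, not the thesis implementation; the key and message are invented): both parties hold a pre-shared symmetric key, a tag travels with each classical-channel message, and the receiver verifies the tag before accepting the message into post-processing.

```python
import hashlib
import hmac

# Symmetric message authentication for the classical channel
# (illustrative sketch; HMAC-SHA256 stands in for whichever MAC
# construction the thesis actually implemented).

def make_tag(key: bytes, message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(make_tag(key, message), tag)

key = b"pre-shared-secret-key"        # assumed pre-shared key
msg = b"basis choices: +x+xx++x"      # invented classical-channel message
tag = make_tag(key, msg)
assert verify(key, msg, tag)
assert not verify(key, b"tampered message", tag)
```

Symmetric primitives like this are generally considered quantum-safe under standard assumptions, which is one reason MAC-based authentication is attractive for QKD.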
  • Veltheim, Otto (2022)
    The measurement of quantum states has been a widely studied problem ever since the discovery of quantum mechanics. In general, we can only measure a quantum state once as the measurement itself alters the state and, consequently, we lose information about the original state of the system in the process. Furthermore, this single measurement cannot uncover every detail about the system's state and thus, we get only a limited description of the system. However, there are physical processes, e.g., a quantum circuit, which can be expected to create the same state over and over again. This allows us to measure multiple identical copies of the same system in order to gain a fuller characterization of the state. This process of diagnosing a quantum state through measurements is known as quantum state tomography. However, even if we are able to create identical copies of the same system, it is often preferable to keep the number of needed copies as low as possible. In this thesis, we will propose a method of optimising the measurements in this regard. The full description of the state requires determining multiple different observables of the system. These observables can be measured from the same copy of the system only if they commute with each other. As the commutation relation is not transitive, it is often quite complicated to find the best way to match the observables with each other according to these commutation relations. This can be quite handily illustrated with graphs. Moreover, the best way to divide the observables into commuting sets can then be reduced to a well-known graph theoretical problem called graph colouring. Measuring the observables with acceptable accuracy also requires measuring each observable multiple times. This information can also be included in the graph colouring approach by using a generalisation called multicolouring. 
Our results show that this multicolouring approach can offer significant improvements in the number of needed copies when compared to some other known methods.
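The colouring idea can be sketched in a few lines (an illustrative toy, not the thesis code; the observables and the use of the stricter qubit-wise commutation test are assumptions): build the graph whose edges join observables that do not commute, colour it greedily, and read off each colour class as a set measurable from the same copies.

```python
# Group Pauli observables into jointly measurable sets by greedy
# colouring of the non-commutation graph (illustrative sketch).

def qubitwise_commute(p, q):
    # Two Pauli strings qubit-wise commute if, at every position,
    # the letters agree or one of them is the identity.
    return all(a == b or a == "I" or b == "I" for a, b in zip(p, q))

def group_observables(paulis):
    colour = {}
    for p in paulis:
        # Colours already taken by neighbours (non-commuting observables).
        used = {colour[q] for q in colour if not qubitwise_commute(p, q)}
        colour[p] = next(c for c in range(len(paulis)) if c not in used)
    groups = {}
    for p, c in colour.items():
        groups.setdefault(c, []).append(p)
    return list(groups.values())

obs = ["XX", "XI", "IZ", "ZZ", "ZI"]
groups = group_observables(obs)
assert len(groups) == 2  # {XX, XI} and {IZ, ZZ, ZI}
```

Multicolouring, as used in the thesis, additionally assigns each vertex several colours to encode how many repetitions each observable needs; the greedy pass above only illustrates the basic reduction.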
  • Järvinen, Matti (University of Helsinki, 2004)
  • Haataja, Miika-Matias (2017)
    Interfaces of solid and liquid helium exhibit many physical phenomena. At very low temperatures the solid-liquid interface becomes mobile enough to allow a periodic melting-freezing wave to propagate along the surface. These crystallization waves were experimentally confirmed in ^4He decades ago, but in ^3He they are only observable at extremely low temperatures (well below 0.5 mK). This presents a difficult technical challenge: creating a measurement scheme with very low dissipation. We have developed a method to use a quartz tuning fork to probe oscillating helium surfaces. These mechanical oscillators are highly sensitive to interactions with the surrounding medium, which makes them extremely accurate sensors of many material properties. By tracking the fork's resonant frequency with two lock-in amplifiers, we have been able to attain a frequency resolution below 1 mHz. The shift in resonant frequency can then be used to calculate the corresponding change in surface level, if the interaction between the fork and the helium surface is understood. One of the main goals of this thesis was to create interaction models that could provide quantitative estimates for these calculations. Experimental results suggest that the liquid-vapour surface forms a column of superfluid that is suspended from the tip of the fork. Due to the extreme wetting properties of superfluids, the fork is also coated with a thin (∼ 300 Å) layer of helium. The added mass from this layer depends on the fork-surface distance. Oscillations of the surface level thus cause a periodic change in the effective mass of the fork, which in turn modulates the resonant frequency. For the solid-liquid interface the interaction is based on the inviscid flow of superfluid around the moving fork. The added hydrodynamic mass increases when the fork oscillates closer to the solid surface. Crystallization waves below the fork will thus change the fork's resonant frequency.
We were able to excite gravity-capillary and crystallization waves in ^4He with a bifilarly wound capacitor. Using the quartz tuning fork detection scheme we measured the spectrum of both types of waves at 10 mK. According to the interaction models developed in this thesis, the surface level resolution of this method was ∼ 10 μm for the gravity-capillary waves and ∼ 1 nm for the crystallization waves. Thanks to the low dissipation (∼ 20 pW) of the measurement scheme, our method is directly applicable in future ^3He experiments.
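The added-mass mechanism behind the detection scheme can be illustrated with a back-of-the-envelope calculation (all numerical values below are invented for illustration, not the thesis data): for a harmonic oscillator with f = (1/2π)√(k/m), a small added mass δm shifts the resonance by δf ≈ -f₀ δm / (2m).

```python
import math

# Resonant-frequency shift of an oscillator from a small added mass
# (illustrative sketch; k, m and dm are assumed values).

def freq(k, m):
    return math.sqrt(k / m) / (2 * math.pi)

k = 4.0e4     # effective spring constant, N/m (assumed)
m = 2.0e-7    # effective mass of the fork prong, kg (assumed)
f0 = freq(k, m)

dm = 1.0e-13  # added helium mass, kg (assumed)
df_exact = freq(k, m + dm) - f0
df_linear = -f0 * dm / (2 * m)

# The linearised shift agrees with the exact one to first order in dm/m.
assert abs(df_exact - df_linear) < 1e-3 * abs(df_linear)
```

Because δf scales linearly with δm for small perturbations, a sub-mHz frequency resolution translates directly into sensitivity to tiny changes in the mass loading, and hence in the surface level below the fork.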
  • Heikkilä, Mikko (2016)
    Probabilistic graphical models are a versatile tool for doing statistical inference with complex models. The main impediment to their use, especially with more elaborate models, is the heavy computational cost incurred. The development of approximations that enable the use of graphical models in various tasks while requiring less computational resources is therefore an important area of research. In this thesis, we test one such recently proposed family of approximations, called quasi-pseudolikelihood (QPL). Graphical models come in two main variants: directed models and undirected models, of which the latter are also called Markov networks or Markov random fields. Here we focus solely on the undirected case with continuous valued variables. The specific inference task the QPL approximations target is model structure learning, i.e. learning the model dependence structure from data. In the theoretical part of the thesis, we define the basic concepts that underpin the use of graphical models and derive the general QPL approximation. As a novel contribution, we show that one member of the QPL approximation family is not consistent in the general case: asymptotically, for this QPL version, there exists a case where the learned dependence structure does not converge to the true model structure. In the empirical part of the thesis, we test two members of the QPL family on simulated datasets. We generate datasets from Ising models and Sherrington-Kirkpatrick models and try to learn them using QPL approximations. As a reference method, we use the well-established graphical lasso (Glasso). Based on our results, the tested QPL approximations work well with relatively sparse dependence structures, while the more densely connected models, especially with weaker interaction strengths, present challenges that call for further research.
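The simulated-data side of such experiments can be sketched compactly (an illustrative toy, not the thesis code; the coupling matrix and sweep count are assumptions): data are drawn from an Ising model by Gibbs sampling, and the coupling matrix J encodes the dependence structure the learning methods then try to recover.

```python
import math
import random

# Gibbs sampling from a small Ising model (illustrative sketch).
# J is the symmetric coupling matrix; J[i][j] != 0 means variables
# i and j are directly dependent in the true structure.

def gibbs_sample_ising(J, n_sweeps, rng):
    n = len(J)
    s = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(n_sweeps):
        for i in range(n):
            # Conditional: P(s_i = +1 | rest) = 1 / (1 + exp(-2 * field)).
            field = sum(J[i][j] * s[j] for j in range(n) if j != i)
            p_up = 1.0 / (1.0 + math.exp(-2.0 * field))
            s[i] = 1 if rng.random() < p_up else -1
    return s

rng = random.Random(0)
# Chain structure 0-1-2 with ferromagnetic couplings (assumed values).
J = [[0.0, 1.0, 0.0],
     [1.0, 0.0, 1.0],
     [0.0, 1.0, 0.0]]
sample = gibbs_sample_ising(J, n_sweeps=100, rng=rng)
assert all(x in (-1, 1) for x in sample)
```

Repeating the sampler gives a dataset of spin configurations; structure learning then asks whether the zero pattern of J can be recovered from those samples alone, which is exactly the setting in which QPL and Glasso are compared.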