
Browsing by Title


  • Enckell, Anastasia (2023)
    Numerical techniques have become powerful tools for studying quantum systems. Eventually, quantum computers may enable novel ways to perform numerical simulations and conquer problems that arise in classical simulations of highly entangled matter. Simple one-dimensional systems of low entanglement can be simulated efficiently on a classical computer using tensor networks. Such toy simulations also give us the opportunity to study the methods of quantum simulation, such as different transformation techniques and optimization algorithms, that could benefit near-term quantum technologies. In this thesis, we study a theoretical framework for fermionic quantum simulation and simulate the real-time evolution of particles governed by the Gross-Neveu model in one dimension. To simulate the Gross-Neveu model classically, we use the Matrix Product State (MPS) method. Starting from the continuum case, we discretise the model by putting it on a lattice and encode the time evolution operator with the help of two fermion-to-qubit transformations, Jordan-Wigner and Bravyi-Kitaev. The simulation results are visualised as plots of probability density and show the expected flavour and spatial symmetry of the system. A comparison of the two transformations shows better performance for the Jordan-Wigner transformation both before and after gate reduction.
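The Jordan-Wigner mapping mentioned above admits a compact numerical check. The sketch below (illustrative Python, not the thesis code; site ordering and sign conventions are the usual textbook ones) builds the qubit images of fermionic annihilation operators on three sites and verifies the canonical anticommutation relations:

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def jw_annihilation(j, n):
    """Jordan-Wigner image of the fermionic annihilation operator a_j
    on n sites: Z^{(x)j} (x) (X + iY)/2 (x) I^{(x)(n-j-1)}."""
    ops = [Z] * j + [(X + 1j * Y) / 2] + [I2] * (n - j - 1)
    return kron_all(ops)

n = 3
a = [jw_annihilation(j, n) for j in range(n)]

# Check the canonical anticommutation relations {a_i, a_j^dag} = delta_ij
for i in range(n):
    for j in range(n):
        anti = a[i] @ a[j].conj().T + a[j].conj().T @ a[i]
        expected = np.eye(2 ** n) if i == j else np.zeros((2 ** n, 2 ** n))
        assert np.allclose(anti, expected)
```

The Z-string prefix is what makes the qubit operators anticommute across different sites; the Bravyi-Kitaev transformation achieves the same algebra with logarithmic-weight strings.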
  • Hotari, Juho (2024)
    Quantum computing has enormous potential in machine learning, where problems can quickly scale to be intractable for classical computation. Quantum machine learning (QML) is a research area that combines ideas from quantum computing and machine learning. Powerful and useful machine learning depends on large-scale datasets for training models to solve real-life problems. Currently, quantum machine learning lacks the large-scale quantum datasets required to further develop models and test quantum machine learning algorithms, and this shortage is limiting quantum advantage in the field. In this thesis, the concept of quantum data and the different types of applied quantum datasets used to develop quantum machine learning models are studied. The research methodology is a systematic and comparative review of state-of-the-art literature on quantum computing and quantum machine learning from recent years. We classify datasets into inherent and non-inherent quantum data based on the nature of the data. The literature review reveals a pattern in applied quantum machine learning: testing and benchmarking QML models primarily uses non-inherent quantum data, i.e. classical data encoded into a quantum system, while separate research focuses on generating inherent quantum datasets.
  • Haataja, Hanna (2016)
    In this thesis we introduce the Coleman-Weinberg mechanism through sample calculations. We calculate the effective potential in massless scalar theory and massless quantum electrodynamics. After the sample calculations, we walk through a simple model in which the scalar particle that breaks scale invariance resides in a hidden sector. Before going into the calculations we introduce basic concepts of quantum field theory; in that context we discuss field interactions and the Feynman rules for Feynman diagrams. Afterwards we introduce thermal field theory and calculate the effective potential in two cases: massive scalar theory and the Standard Model without fermions. We present the procedure for calculating the effective potential including ring diagram contributions. The motivation is that spontaneously broken symmetries are sometimes restored in the high-temperature regime. If the phase transition between the broken-symmetry and full-symmetry phases is of first order, baryogenesis can occur. Using the methods introduced in this thesis, Standard Model extensions that contain hidden sectors can be analysed.
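For context, the classic one-loop result of Coleman and Weinberg for the massless scalar theory takes the following standard textbook form (quoted in their renormalization convention with renormalization scale M; the thesis' conventions may differ):

```latex
V_{\mathrm{eff}}(\varphi_c) \;=\; \frac{\lambda}{4!}\,\varphi_c^{4}
\;+\; \frac{\lambda^{2}\varphi_c^{4}}{256\pi^{2}}
\left(\ln\frac{\varphi_c^{2}}{M^{2}} \;-\; \frac{25}{6}\right)
```

In massless scalar QED the photon loop dominates and the analogous logarithmic term carries the coefficient 3e^4/(64\pi^2); in both cases the logarithm generates a nontrivial minimum away from the origin, which is the radiative symmetry breaking the mechanism is named for.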
  • Hernandez Serrano, Ainhoa (2023)
    Using quantum algorithms to carry out machine learning (ML) tasks is known as Quantum Machine Learning (QML), and the methods developed within this field have the potential to outperform their classical counterparts on certain learning problems. The development of the field depends in part on that of a functional quantum random access memory (QRAM), called for by some of the algorithms devised. Such a device would store data in superposition and could be queried when algorithms require it, similarly to its classical counterpart, allowing efficient data access. Taking an axiomatic approach, this thesis presents the main considerations, assumptions and results regarding QRAM, yielding a QRAM handbook and a comprehensive introduction to the literature pertaining to it.
  • Lintulampi, Anssi (2023)
    Secure data transmission is a crucial part of modern cloud services and data infrastructures. Securing a communication channel is possible if the communicating parties can securely exchange a secret key, which is then used in a symmetric encryption algorithm to encrypt digital data transmitted over an unprotected channel. Quantum key distribution (QKD) is a method the communicating parties can use to securely share a secret cryptographic key with each other. The security of quantum key distribution requires that the parties can ensure the authenticity and integrity of the messages they exchange on the classical channel during the protocol; for this purpose they use cryptographic authentication techniques such as digital signatures or message authentication codes. The development of quantum computers affects how traditional authentication solutions can be used in the future. For example, traditional digital signature algorithms become vulnerable if a quantum computer is used to solve the underlying mathematical problems. Authentication solutions used in quantum key distribution should therefore be safe even against adversaries with a quantum computer. This master's thesis studies quantum-safe authentication methods that could be used with quantum key distribution. Two different quantum-safe authentication methods were implemented for the quantum key distribution protocol BB84, and they are compared based on their speed and the size of the authenticated messages. Security aspects related to the authentication are also evaluated. The results show that both authentication methods are suitable for use in quantum key distribution, and that the implemented method using message authentication codes is faster than the one using digital signatures.
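Authenticating the classical channel with a message authentication code can be sketched with a pre-shared key and HMAC-SHA256. This is a generic illustration of the MAC pattern, not the construction implemented in the thesis; the message content and key handling are hypothetical:

```python
import hmac
import hashlib
import secrets

# Hypothetical sketch: tagging BB84 classical-channel messages with a
# pre-shared key and HMAC-SHA256 (one symmetric, quantum-safe MAC option).
def authenticate(key: bytes, message: bytes) -> bytes:
    """Return an authentication tag for the message."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    """Constant-time check that the tag matches the message."""
    return hmac.compare_digest(authenticate(key, message), tag)

key = secrets.token_bytes(32)          # pre-shared authentication key
msg = b"basis choices: +x+xx++x"       # e.g. a basis announcement
tag = authenticate(key, msg)

assert verify(key, msg, tag)           # authentic message accepted
assert not verify(key, b"tampered", tag)  # modified message rejected
```

A MAC like this needs only a short symmetric key shared in advance, which is one reason MAC-based authentication can outperform signature schemes in a QKD setting.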
  • Veltheim, Otto (2022)
    The measurement of quantum states has been a widely studied problem ever since the discovery of quantum mechanics. In general, we can only measure a quantum state once as the measurement itself alters the state and, consequently, we lose information about the original state of the system in the process. Furthermore, this single measurement cannot uncover every detail about the system's state and thus, we get only a limited description of the system. However, there are physical processes, e.g., a quantum circuit, which can be expected to create the same state over and over again. This allows us to measure multiple identical copies of the same system in order to gain a fuller characterization of the state. This process of diagnosing a quantum state through measurements is known as quantum state tomography. However, even if we are able to create identical copies of the same system, it is often preferable to keep the number of needed copies as low as possible. In this thesis, we will propose a method of optimising the measurements in this regard. The full description of the state requires determining multiple different observables of the system. These observables can be measured from the same copy of the system only if they commute with each other. As the commutation relation is not transitive, it is often quite complicated to find the best way to match the observables with each other according to these commutation relations. This can be quite handily illustrated with graphs. Moreover, the best way to divide the observables into commuting sets can then be reduced to a well-known graph theoretical problem called graph colouring. Measuring the observables with acceptable accuracy also requires measuring each observable multiple times. This information can also be included in the graph colouring approach by using a generalisation called multicolouring. 
Our results show that this multicolouring approach can offer significant improvements in the number of needed copies when compared to some other known methods.
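The grouping of observables into commuting, jointly measurable sets can be sketched as a colouring of the conflict graph: Pauli-string observables conflict when they anticommute, and each colour class is a set that can be measured from the same copy of the state. The sketch below uses a plain greedy colouring and invented observables; it illustrates the idea only, not the thesis' multicolouring algorithm:

```python
def paulis_commute(p, q):
    """Two Pauli strings commute iff they differ on an even number of
    sites where both act non-trivially."""
    clashes = sum(1 for a, b in zip(p, q)
                  if a != 'I' and b != 'I' and a != b)
    return clashes % 2 == 0

def greedy_grouping(observables):
    """Greedy colouring of the non-commutation ('conflict') graph:
    each returned group is a set of mutually commuting observables,
    measurable from a single copy of the state."""
    groups = []
    for obs in observables:
        for g in groups:
            if all(paulis_commute(obs, other) for other in g):
                g.append(obs)
                break
        else:
            groups.append([obs])  # no compatible group: open a new colour
    return groups

obs = ["XXI", "ZZI", "IXX", "IZZ", "XIX", "ZIZ"]
groups = greedy_grouping(obs)
# e.g. three groups of two mutually commuting observables each
```

Weighting each observable by the number of repetitions it needs, as in the multicolouring generalisation described above, replaces each vertex by that many copies before colouring.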
  • Järvinen, Matti (University of Helsinki, 2004)
  • Haataja, Miika-Matias (2017)
    Interfaces of solid and liquid helium exhibit many physical phenomena. At very low temperatures the solid-liquid interface becomes mobile enough to allow a periodic melting-freezing wave to propagate along the surface. These crystallization waves were experimentally confirmed in ^4He decades ago, but in ^3He they are only observable at extremely low temperatures (well below 0.5 mK). This presents a difficult technical challenge: creating a measurement scheme with very low dissipation. We have developed a method that uses a quartz tuning fork to probe oscillating helium surfaces. These mechanical oscillators are highly sensitive to interactions with the surrounding medium, which makes them extremely accurate sensors of many material properties. By tracking the fork's resonant frequency with two lock-in amplifiers, we have been able to attain a frequency resolution below 1 mHz. The shift in resonant frequency can then be used to calculate the corresponding change in surface level, provided the interaction between the fork and the helium surface is understood. One of the main goals of this thesis was to create interaction models that could provide quantitative estimates for these calculations. Experimental results suggest that the liquid-vapour surface forms a column of superfluid suspended from the tip of the fork. Due to the extreme wetting properties of superfluids, the fork is also coated with a thin (∼ 300 Å) layer of helium. The added mass from this layer depends on the fork-surface distance. Oscillations of the surface level thus cause a periodic change in the effective mass of the fork, which in turn modulates the resonant frequency. For the solid-liquid interface the interaction is based on the inviscid flow of superfluid around the moving fork. The added hydrodynamic mass increases when the fork oscillates closer to the solid surface. Crystallization waves below the fork will thus change the fork's resonant frequency.
We were able to excite gravity-capillary and crystallization waves in ^4He with a bifilarly wound capacitor. Using the quartz tuning fork detection scheme we measured the spectrum of both types of waves at 10 mK. According to the interaction models developed in this thesis, the surface level resolution of this method was ∼ 10 μm for the gravity-capillary waves and ∼ 1 nm for the crystallization waves. Thanks to the low dissipation (∼ 20 pW) of the measurement scheme, our method is directly applicable in future ^3He experiments.
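The mass-loading mechanism described above follows from the harmonic oscillator relation f = (1/2π)·sqrt(k/m): a small added mass δm shifts the resonance by δf ≈ -f·δm/(2m). The sketch below checks this first-order formula against the exact expression; all numbers (spring constant, effective mass, film mass) are purely illustrative, not the fork parameters used in the thesis:

```python
import math

def resonant_frequency(k, m):
    """Resonant frequency f = (1/2*pi) * sqrt(k/m) of a harmonic oscillator."""
    return math.sqrt(k / m) / (2.0 * math.pi)

# illustrative quartz tuning fork parameters (hypothetical values)
k = 25e3           # effective spring constant (N/m)
m = 2.5e-7         # effective mass (kg)
f0 = resonant_frequency(k, m)

dm = 1e-12         # hypothetical added helium-film mass (kg)
f1 = resonant_frequency(k, m + dm)
shift_exact = f1 - f0
shift_approx = -f0 * dm / (2 * m)   # first-order mass-loading formula
```

The agreement of the two shifts for δm ≪ m is why a sub-mHz frequency resolution translates directly into a very fine surface-level resolution.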
  • Heikkilä, Mikko (2016)
    Probabilistic graphical models are a versatile tool for statistical inference with complex models. The main impediment to their use, especially with more elaborate models, is the heavy computational cost incurred. Developing approximations that enable the use of graphical models in various tasks while requiring fewer computational resources is therefore an important area of research. In this thesis, we test one such recently proposed family of approximations, called quasi-pseudolikelihood (QPL). Graphical models come in two main variants: directed models and undirected models, the latter also called Markov networks or Markov random fields. Here we focus solely on the undirected case with continuous-valued variables. The specific inference task the QPL approximations target is model structure learning, i.e. learning the model dependence structure from data. In the theoretical part of the thesis, we define the basic concepts that underpin the use of graphical models and derive the general QPL approximation. As a novel contribution, we show that one member of the QPL approximation family is not consistent in the general case: asymptotically, for this QPL version, there exists a case where the learned dependence structure does not converge to the true model structure. In the empirical part of the thesis, we test two members of the QPL family on simulated datasets. We generate datasets from Ising models and Sherrington-Kirkpatrick models and try to learn their structure using QPL approximations, with the well-established graphical lasso (Glasso) as a reference method. Based on our results, the tested QPL approximations work well with relatively sparse dependence structures, while more densely connected models, especially with weaker interaction strengths, present challenges that call for further research.
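Generating synthetic structure-learning data from an Ising model, as in the empirical part above, can be sketched with a simple Gibbs sampler. This is an illustrative stand-in with an invented chain-shaped coupling matrix; the thesis' actual simulation setup, couplings and sample sizes are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ising_gibbs(J, n_samples, burn_in=500, thin=10):
    """Gibbs sampler for a zero-field Ising model with symmetric,
    zero-diagonal coupling matrix J; spins take values +-1."""
    d = J.shape[0]
    s = rng.choice([-1, 1], size=d)
    samples = []
    for t in range(burn_in + n_samples * thin):
        for i in range(d):
            # conditional P(s_i = +1 | rest) = sigmoid(2 * sum_j J_ij s_j)
            field = J[i] @ s
            p = 1.0 / (1.0 + np.exp(-2.0 * field))
            s[i] = 1 if rng.random() < p else -1
        if t >= burn_in and (t - burn_in) % thin == 0:
            samples.append(s.copy())
    return np.array(samples)

# sparse chain-structured couplings: only neighbours interact
d = 5
J = np.zeros((d, d))
for i in range(d - 1):
    J[i, i + 1] = J[i + 1, i] = 0.8
X = sample_ising_gibbs(J, n_samples=200)
```

A structure-learning method such as QPL or Glasso would then be judged on how well it recovers the nonzero pattern of J from samples like X.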
  • Suominen, Heikki (2022)
    Quantum computers are one of the most prominent emerging technologies of the 21st century. While several practical implementations of the qubit, the elemental unit of information in quantum computers, exist, the family of superconducting qubits remains one of the most promising platforms for scaled-up quantum computers. Lately, as the limiting factor of non-error-corrected quantum computers has begun to shift from the number of qubits to gate fidelity, efficient optimization of control and readout parameters has become a field of significant scientific interest. Since these procedures are multibranched and difficult to automate, a great deal of effort has gone into developing associated software, and even technologies such as machine learning are making an appearance in modern programs. In this thesis, we offer an extensive theoretical background on superconducting transmon qubits, starting from classical models of electronic circuits and moving towards circuit quantum electrodynamics. We consider how the qubit is controlled, how its state is read out, and how the information contained in it can be corrupted by noise. We review theoretical models for characteristic parameters such as decoherence times, and see how control pulse parameters such as amplitude and rise time affect gate fidelity. We also discuss the procedure for experimentally obtaining characteristic qubit parameters, and the optimized randomized benchmarking for immediate tune-up (ORBIT) protocol for control pulse optimization, both in theory and alongside novel experimental results. The experiments are carried out with refactored characterization software and novel ORBIT software, using the premises and resources of the Quantum Computing and Devices (QCD) group at Aalto University.
The refactoring project, together with the software used for the ORBIT protocol, aims to provide the QCD group with efficient and streamlined methods for finding characteristic qubit parameters and high-fidelity control pulses. In the last parts of the thesis, we evaluate the success and shortcomings of the introduced projects, and discuss future perspectives for the software.
  • Salminen, Reeta-Maaret Emilia (2013)
    In this study, polymeric fluorescence quenchers were investigated, with the focus on quenching efficiency assessed by Stern-Volmer plotting. Poly(4-vinylpyridine), poly(nitrostyrene), poly(allylamine) and two other polymers were used as quenchers. Measurements with quenchers other than poly(nitrostyrene) were conducted in DMF. The measurements in aqueous solutions were conducted at different pH, with water and methanol as solvents for the pyrene; using methanol as the solvent made it possible to vary the pyrene concentration. Poly(4-vinylpyridine) was found to be an excellent quencher of fluorescence in aqueous solutions at pH 3.5, as was poly(nitrostyrene) in DMF solutions. Their Stern-Volmer plots showed a linear dependence of the intensity ratio on quencher concentration, whereas the other polymeric quenchers tested showed downward curvature, implying that the polymer conformation may prevent fluorophore-quencher interactions. The quenching of fluorescence was also found to be independent of pH.
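Stern-Volmer analysis fits the ratio of unquenched to quenched fluorescence intensity, I0/I, against quencher concentration [Q]; for purely dynamic quenching I0/I = 1 + K_SV·[Q], so a linear plot yields the Stern-Volmer constant K_SV as the slope. A minimal fitting sketch with made-up numbers (not the thesis' measurements):

```python
import numpy as np

# Illustrative data only: intensities fabricated to follow
# I0/I = 1 + K_SV*[Q] with K_SV ~ 100 1/M.
Q = np.array([0.0, 0.002, 0.004, 0.006, 0.008])   # quencher conc. (M)
I = np.array([100.0, 83.3, 71.4, 62.5, 55.6])     # fluorescence intensity

ratio = I[0] / I                       # Stern-Volmer ratio I0/I
K_sv, intercept = np.polyfit(Q, ratio, 1)   # slope = K_SV, intercept ~ 1
```

Downward curvature of such a plot, as observed for some of the polymers above, signals a departure from this simple linear model, e.g. quenchers that cannot all reach the fluorophore.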
  • Halme, Topi (2021)
    In a quickest detection problem, the objective is to detect abrupt changes in a stochastic sequence as quickly as possible while limiting the rate of false alarms. The development of algorithms that, after each observation, decide either to stop and declare a change as having happened or to continue the monitoring process has been an active line of research in mathematical statistics. These algorithms seek to optimally balance the inherent trade-off between the average detection delay and the likelihood of declaring a change prematurely. Change-point detection methods have applications in numerous domains, including monitoring the environment or the radio spectrum, target detection, financial markets, and others. Classical quickest detection theory focuses on settings where only a single data stream is observed. In modern applications facilitated by developments in sensing technology, one may be tasked with monitoring multiple streams of data for changes simultaneously. Wireless sensor networks and mobile phones are examples of technology in which devices can sense their local environment and transmit data sequentially to a common fusion center (FC) or cloud for inference. When performing quickest detection tasks on multiple data streams in parallel, the classical tools of quickest detection theory focusing on false alarm probability control may become insufficient. Instead, controlling the false discovery rate (FDR), the expected proportion of false discoveries (false alarms) among all discoveries, has recently been proposed as a more useful and scalable error criterion. In this thesis, novel methods and theory related to quickest detection in multiple parallel data streams are presented. The methods aim to minimize detection delay while controlling the FDR. In addition, scenarios where not all of the devices communicating with the FC can remain operational and transmitting at all times are considered.
The FC must choose which subset of data streams it wants to receive observations from at a given time instant. Intelligently choosing which devices to turn on and off may extend the devices’ battery life, which can be important in real-life applications, while affecting the detection performance only slightly. The performance of the proposed methods is demonstrated in numerical simulations to be superior to existing approaches. Additionally, the topic of multiple hypothesis testing in spatial domains is briefly addressed. In a multiple hypothesis testing problem, one tests multiple null hypotheses at once while trying to control a suitable error criterion, such as the FDR. In a spatial multiple hypothesis problem each tested hypothesis corresponds to e.g. a geographical location, and the non-null hypotheses may appear in spatially localized clusters. It is demonstrated that implementing a Bayesian approach that accounts for the spatial dependency between the hypotheses can greatly improve testing accuracy.
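The FDR criterion used above is classically controlled, in the static multiple-testing setting, by the Benjamini-Hochberg step-up procedure. The sketch below implements that standard procedure as background; it is not the sequential detection machinery developed in the thesis, and the p-values are invented:

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: reject the hypotheses with
    the k smallest p-values, where k is the largest index such that
    p_(k) <= (k/m) * alpha.  Controls the FDR at level alpha for
    independent tests."""
    p = np.asarray(p_values)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

p = [0.001, 0.008, 0.039, 0.041, 0.52, 0.9]
rej = benjamini_hochberg(p, alpha=0.05)
# only the two smallest p-values survive the step-up thresholds here
```

The step-up thresholds α·k/m are what make the criterion scale gracefully with the number of monitored streams, in contrast to family-wise false alarm control.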
  • Nikula, Petter (2016)
    This thesis investigates the automated near-real-time science analysis performed at the INTEGRAL Science Data Centre. The structure of the Quick-Look Analysis pipeline and its individual analysis stages are detailed. The stage performing pattern recognition for two-dimensional coordinate lists, i.e. source identification, is tested in depth. The lists contain sources located in a randomly selected 9° by 9° area of the sky. Using the current live version and default parameters, a simulated new source was correctly identified 98% of the time, while fields with no new sources produced false detections 8% of the time. The testing reveals two separate flaws: a code error and a methodological error. The code error reduces the sensitivity of recognizing that a new source has been detected, while the methodological error causes the algorithm to report the detection of previously unknown sources where none exist. A possible solution is presented: with it, new source detection improved to well above 99% and false detections fell below 2%. A second methodological error causes the algorithm used to correct for the pointing error of the instrument to produce unreliable results. Fortuitously, this problem is serious only for small pointing errors, where the source matching algorithm is able to compensate for it.
  • Helle, Joose (2020)
    It is likely that journey-time exposure to pollutants limits the positive health effects of active transport modes (e.g. walking and cycling). One of the pollutants caused by vehicular traffic is traffic noise, which is likely to cause various negative health effects such as increased stress levels and blood pressure. In prior studies, individuals' exposure to community noise has usually been assessed only with respect to home location, as required by national and international policies. However, these static exposure assessments most likely ignore a substantial share of individuals' total daily noise exposure that occurs while they are on the move. Hence, new methods are needed for both assessing and reducing journey-time exposure to traffic noise as well as to other pollutants. In this study, I developed a multifunctional routing application for 1) finding shortest paths, 2) assessing dynamic exposure to noise on the paths and 3) finding alternative, quieter paths for walking. The application uses street network data from OpenStreetMap and modelled traffic noise data of typical daytime traffic noise levels. The underlying least cost path (LCP) analysis employs a custom-designed environmental impedance function for noise and a set of noise sensitivity coefficients. I defined a set of indices for quantifying and comparing dynamic (i.e. journey-time) exposure to high noise levels. I applied the developed routing application in a case study of pedestrians' dynamic exposure to noise on commuting-related walks in Helsinki. The walks were projected by carrying out extensive public transport itinerary planning on census-based commuting flow data. In addition, I statistically assessed the achievable reductions in exposure to traffic noise from taking quieter paths, using a subset of 18,446 commuting-related walks (OD pairs).
The results show significant spatial variation in average dynamic noise exposure between neighborhoods but also significant achievable reductions in noise exposure by quieter paths; depending on the situation, quieter paths provide 12–57 % mean reduction in exposure to noise levels higher than 65 dB and 1.6–9.6 dB mean reduction in mean dB (compared to the shortest paths). At least three factors seem to affect the achievable reduction in noise exposure on alternative paths: 1) exposure to noise on the shortest path, 2) length of the shortest path and 3) length of the quiet path compared to the shortest path. I have published the quiet path routing application as a web-based quiet path routing API (application programming interface) and developed an accompanying quiet path route planner as a mobile-friendly web map application. The online quiet path route planner demonstrates the applicability of the quiet path routing method in real-life situations and can thus help pedestrians to choose quieter paths. Since the quiet path routing API is open, anyone can query short and quiet paths equipped with attributes on journey-time exposure to noise. All methods and source codes developed in the study are openly available via GitHub. Individuals’ and urban planners’ awareness of dynamic exposure to noise and other pollutants should be further increased with advanced exposure assessments and routing applications. Web-based exposure-aware route planner applications have the potential to help individuals to choose alternative, healthier paths. When developing exposure-based routing analysis further, attempts should be made to enable simultaneously considering multiple environmental exposures in order to find overall healthier paths.
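The least-cost-path idea described above can be sketched as a Dijkstra search whose edge cost inflates street length by a noise penalty scaled by a sensitivity coefficient. Both the impedance function and the toy network below are hypothetical illustrations; the thesis' actual impedance function, noise data and coefficients differ:

```python
import heapq

def noise_adjusted_cost(length, db, sensitivity):
    """Hypothetical impedance: edge length inflated by a penalty that
    grows linearly with the dB level above a 45 dB base (0 at 45 dB,
    1 at 75 dB)."""
    penalty = max(0.0, (db - 45.0) / 30.0)
    return length * (1.0 + sensitivity * penalty)

def quiet_path(graph, start, goal, sensitivity):
    """Dijkstra over edges (neighbour, length_m, noise_db); with
    sensitivity=0 this reduces to the plain shortest path."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, length, db in graph.get(u, []):
            nd = d + noise_adjusted_cost(length, db, sensitivity)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# toy network: a short loud street (70 dB) vs a longer quiet one (50 dB)
g = {"A": [("B", 100, 70), ("C", 60, 50)],
     "B": [("D", 100, 70)],
     "C": [("D", 160, 50)],
     "D": []}
```

With sensitivity 0 the loud 200 m route wins; raising the sensitivity makes the 220 m quiet route the least-cost path, which is exactly the shortest-vs-quiet trade-off quantified in the study.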
  • Timperi, Kalle (2014)
    This thesis examines random Fourier series and their properties. The work divides into two parts: the first considers so-called Fourier series with Rademacher coefficients, and the second the construction of Brownian motion as a random Fourier series. In a Rademacher series, each given deterministic Fourier coefficient c_n is multiplied by a random sign, i.e. a random coefficient ε with P(ε = 1) = P(ε = −1) = 1/2. The Fourier series considered are defined on the interval [−π, π], so a Rademacher series can be interpreted as a random function on this interval, provided that the series converges almost everywhere. One can then ask with what probability this function has a given property, such as continuity or integrability. It turns out that the convergence of a Rademacher series depends on whether the original coefficients satisfy (c_n)_{n=-∞}^∞ ∈ \ell^2. We show that if this condition holds, the series converges almost surely almost everywhere, and also in the L^2 sense, so that it defines a function F ∈ L^2(−π, π). There are at least two routes to proving the almost sure convergence, one of which relies on the theory of martingales; we treat both, and present the required martingale-theoretic results in the early part of the thesis. We then show that when the series converges in the L^2 sense, the stronger property e^{λ|F|^2} ∈ L^1(−π, π) in fact holds for all λ ∈ [0, ∞). It follows that F belongs to every L^p space for p ∈ [0, ∞). This raises the question of whether the result also holds for p = ∞. At the end of this part we therefore use lacunary Fourier series to construct examples of functions F for which, in the situation described above, F ∈ L^p(−π, π) for all p ∈ [0, ∞) but nevertheless F \notin L^∞(−π, π). We then consider the case (c_n)_{n=−∞}^∞ \notin \ell^2. In this case the Rademacher series almost surely diverges and oscillates at almost every point x ∈ [−π, π], and almost surely does not represent any measure on [−π, π]. We show, however, that if the coefficients c_n grow at most polynomially, there always exists a periodic distribution on [−π, π] whose Fourier coefficients form the sequence (c_n)_{n=−∞}^∞. In the final part of the thesis we derive a representation of Brownian motion as a random Fourier series. Here we make use of the Karhunen-Loève theorem, which gives a general method for representing a stochastic process as a random series. We first prove the Karhunen-Loève theorem and then derive the Karhunen-Loève series expansion of Brownian motion, which turns out to be a sine series whose coefficients are independent, normally distributed random variables.
  • Matero, Ilkka Seppo Olavi (2014)
    In this thesis I study the radiation balance and heat budget of a multiyear sea ice floe drifting in the central Arctic Ocean. The objectives of the study were to quantify the vertical partitioning of shortwave and longwave radiation and to quantify the different components of the heat budget of the floe in question, both inside the floe and at its interfaces. The measurements were set up at 88°26.6'N, 176°59.88'W on the 8th of August and carried out for ten days, as part of the fourth Chinese National Arctic Expedition CHINARE2010. The measurement setup consisted of a net radiometer, four PAR sensors, a pyrano-albedometer, three spectral radiometers, daily snow pit measurements, weather observations and six ice corings. With the data from these studies I was able to quantify the rate of melting and the fluxes of heat both at the surface and at the bottom of the ice. The data allowed examining the fraction of transmitted and conducted heat, but were insufficient for properly quantifying the internal changes and the spectral composition of the shortwave radiation at different depths. The surface was observed to lose heat mainly in the longwave part of the spectrum. The average net radiation on top of the ice at wavelengths between 200 nanometers and 100 micrometers over the period was -25.0 Watts per square meter. The heat fluxes of shortwave and longwave radiation were in opposite directions, and the negative heat flux of the longwave radiation dominated until a distinct change in the radiative conditions on the 17th of August. For the remainder of the period these heat fluxes nearly balanced each other and the average net radiation was -2.1 Watts per square meter. The latent and sensible heat fluxes were observed to play a minor role in the surface heat budget, with averages of -1.5 and -0.03 Watts per square meter respectively.
    The ice was observed to melt primarily at the bottom, at a rate of 0.5 cm per day, driven by the input of heat from the underlying ocean. Melting at the surface was not apparent until the last two days of the study, when the upper layer of the snow cover melted. The changes in sea ice and snow cover were visually observed to exhibit significant spatial variability even on a single floe.
  • Tuomola, Anneka (University of Helsinki, 2008)
    In the last two decades, radical cyclization, i.e. intramolecular radical addition, has developed into an important synthetic method for polycyclic indoles and pyrroles. The product molecules obtained, or their derivatives, are often natural or synthetic alkaloids of biological or medicinal interest. The thesis covers both intramolecular radical additions of pyrrolyl, indolyl or indolylacyl radicals to π-bonds, and intramolecular additions of several different kinds of radicals to the π-system of indole or pyrrole. Cyclizations as steps in radical cascades are also treated. Radical reactions can be quenched either oxidatively or reductively. To preserve the aromaticity of the heteroarene, cyclizations onto the pyrrole or indole ring must be quenched oxidatively; oxidative radical additions to aromatics are termed homolytic aromatic substitutions. There are different ways of generating the reactant radical from a radical precursor. In some cases the precursor contains a very labile bond that can be cleaved photochemically or with the help of an initiator; for example, the carbon-sulfur bond of an O-ethyl S-alkyl xanthate can be cleaved in this way. Often, however, a radical mediator is used to form the reactant radical from its precursor. Mediators are typically compounds that themselves form radicals under the reaction conditions, with a high affinity for a specific atom or group of atoms of the precursor, which is then abstracted; the mediator thus acts as an intermediary in the formation of the reactant radical. Examples of mediators of this kind used in the synthesis of polycyclic pyrroles and indoles are tributyltin hydride, hexabutylditin, tris(trimethylsilyl)silane, tributylgermanium hydride, dicumyl peroxide, triethylborane, sodium arenesulfinates (with acetic acid) and Se-phenyl p-toluene selenosulfonate. Dimethyl sulfoxide can also be regarded as a mediator, as it forms methyl radicals in a Fenton reaction in the solvent. Transition-metal salts can likewise generate the reactant radical from the precursor through one-electron oxidations or reductions; in the synthesis of polycyclic pyrroles and indoles, the reactant radical has been formed by one-electron oxidation with Mn(OAc)3 or Ag2+ (Minisci reaction) and by one-electron reduction with a Ni(I) complex or SmI2. The thesis is organized according to the reagent or reagents that bring about the formation of the reactant radical in the synthesis of polycyclic indoles and pyrroles. About half of the thesis deals with tributyltin hydride-mediated cyclizations, since this reagent, despite its toxicity, is by far the most widely used. The thesis discusses the mechanism of formation of the reactant radical from the precursor, the cyclization and its possible regioselectivity, other radical reactions in radical cascades, and how the product radical is quenched.
  • Benke, Petra (2021)
    Active galactic nuclei (AGN) are among the most powerful sources in the luminous Universe. Radio-loud AGN exhibit prominent relativistic outflows known as jets, whose synchrotron radiation can be detected in the radio domain. The launching, evolution, and variable nature of these sources are still not fully understood. We study 3C 84 because its proximity, brightness, and the intermittent nature of its jet make it a good target for investigating these open questions of the AGN phenomenon. 3C 84 (optical counterpart: NGC 1275) is a Fanaroff-Riley type I radio galaxy located in the Perseus cluster at z = 0.0176. Due to its close proximity, 3C 84 has been a favourable target for observations throughout the entire electromagnetic spectrum, especially in the radio domain. Its most recent activity started in 2003, when a new component emerged from the core in the form of a restarted parsec-scale jet. This provided a rare opportunity to study the formation and evolution of a jet (see Nagai et al. 2010, 2014, 2017 and Suzuki et al. 2012). The highest-resolution results were obtained by Giovannini et al. (2018), who imaged the source with the Global VLBI Network together with the space radio telescope RadioAstron. This enabled them to capture the limb-brightened structure of the restarted jet and measure its collimation profile from ~350 gravitational radii outwards. In this work I present 22 GHz RadioAstron observations carried out 3 years later, in a similar configuration but with a significantly different sampling of the space baselines than the one presented in Giovannini et al. (2018). The calibration was carried out in the Astronomical Image Processing System (AIPS), whereas imaging was done in Difmap (Shepherd 1997). The aim of this thesis work was to obtain a high-resolution image of the source, measure the collimation profile of the restarted jet, and compare the results with those of Giovannini et al. 
(2018), and verify the observed source structures and measured jet properties where possible. Comparing the images of the two epochs (the angular resolution of the 2016 observations is 0.217 x 0.072 mas at PA = -49.6°), both show a similar structure, with the radio core, a diffuse emission region (C2), and the hotspot (C3) at the end of the restarted jet. Edge-brightening is confirmed in both the jet and the counter-jet. However, the jet has advanced by ~1 mas, corresponding to a velocity of 0.55c. C3 has moved from the center of the feature to the jet head, indicating an interaction between the jet and the clumpy external medium (Kino et al. 2018 and Nagai et al. 2017). The position angle of the jet base has also changed between the observations, by approximately 20°. In the 1990s the jet pointed towards C2, then swung westwards when the new jet emerged (Suzuki et al. 2012 and Giovannini et al. 2018), and in the 2016 image it has moved back towards its initial position. This suggests a precessing jet, as observed and modeled by Dominik et al. (2021) and Britzen et al. (2019). Measuring the brightness temperatures of the core and the hotspot shows significant drops of 70% and 50%, respectively, since the 2013 measurements, due to the emission of jet material and the expansion of the jet. Jet width measurements between 1200 and 19000 gravitational radii reveal a less cylindrical collimation profile, with r ∝ z^0.31, where z is the de-projected distance from the core and r is the width of the jet. The evolution of the restarted jet's profile from quasi-cylindrical (Giovannini et al. 2018) to less cylindrical implies that the cocoon surrounding the jet (Savolainen 2018) cannot confine the jet material as it moves further from the core. The measured collimation profile corresponds to a slowly decreasing density and a more steeply decreasing pressure gradient in the external medium. 
Since the closest jet width measurement is only at 1200 gravitational radii from the core (where the jet width is 750 gravitational radii), it cannot confirm the wide jet base measured by Giovannini et al. (2018) at 350 gravitational radii. Based on this result, we arrive at the same conclusion as Giovannini et al. (2018): the jet is either launched from the accretion disk, or it is launched from the ergosphere but undergoes a rapid lateral expansion below 1000 gravitational radii.
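The quoted apparent advance and speed can be roughly cross-checked with a back-of-envelope calculation. The sketch below is my own illustration, not the thesis's analysis pipeline: it assumes a simple Hubble-law distance with H0 = 70 km/s/Mpc and a viewing angle of ~18°, both of which are assumptions I introduce here rather than values stated in the abstract.

```python
import math

# Back-of-envelope check (illustrative assumptions, not the thesis pipeline):
# convert the ~1 mas advance of the jet head over ~3 years into an apparent
# speed, then deproject with an assumed viewing angle.
z = 0.0176             # redshift of 3C 84 / NGC 1275 (from the abstract)
H0 = 70.0              # km/s/Mpc -- assumed value
c_kms = 299792.458     # speed of light in km/s
C_PC_PER_YR = 0.30660  # light travels ~0.3066 pc per year

D_pc = (c_kms * z / H0) * 1e6                  # Hubble-law distance in pc (~75 Mpc)
pc_per_mas = D_pc * math.radians(1.0 / 3.6e6)  # linear scale: 1 mas in radians times D

advance_mas, dt_yr = 1.0, 3.0                  # values quoted in the abstract
beta_app = advance_mas * pc_per_mas / dt_yr / C_PC_PER_YR  # projected speed in units of c

theta = math.radians(18.0)                     # assumed viewing angle -- hypothetical
# invert beta_app = beta*sin(theta) / (1 - beta*cos(theta)) for the intrinsic speed
beta = beta_app / (math.sin(theta) + beta_app * math.cos(theta))

print(f"scale    = {pc_per_mas:.3f} pc/mas")   # ~0.37 pc per mas
print(f"beta_app = {beta_app:.2f}")            # ~0.4 (projected, in the sky plane)
print(f"beta     = {beta:.2f}")                # ~0.55-0.6 after deprojection
```

With these assumed inputs the deprojected speed lands near the 0.55c quoted above; the exact value depends on the adopted cosmology, epoch separation, and viewing angle.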
  • Lempinen, Janne (2012)
    Part of the Rn-222 formed in the decay of Ra-226, a member of the uranium series, seeps from the ground into the atmosphere. Further decay of Rn-222 produces radiolead (Pb-210), which is deposited as fallout onto land and water bodies. In water bodies, Pb-210 binds to settling particles and ends up in the sediment. The radiolead in sediment consists of supported radiolead, produced by the decay of Ra-226 within the sediment, and unsupported radiolead originating from fallout. Unsupported radiolead can be used to date sediments up to about 150 years back. Radiolead dating relies on models that assume a constant sedimentation rate and a constant Pb-210 flux to the sediment, a constant activity of unsupported radiolead at the sediment surface, or a constant radiolead flux alone. Radiolead has traditionally been determined by alpha spectrometry via its granddaughter Po-210. This method requires a laborious radiochemical separation. Nowadays gamma spectrometry can also be used to determine radiolead directly from a sediment sample, but at the low gamma energy of radiolead, self-absorption of the gamma radiation in the sample can be significant and vary with the elemental composition of the sample. The gamma energy of radiolead is low and its intensity small, which complicates gamma-spectrometric determination. In addition, several methods are known in which radiolead is determined via its daughter Bi-210 or via its own beta particles. The radiochemical separations of these methods are also laborious. In the experimental part of this Master's thesis, radiolead was determined by alpha and gamma spectrometry from sediments of Lake Umbozero and Pitkälampi, and the sediments were dated by the radiolead method. In the gamma spectrometry, self-absorption was studied with Cutshall's method. The aim of the work was to develop a new method for determining radiolead that would be less laborious than alpha spectrometry but more sensitive than gamma spectrometry. 
The new method makes use of the Strontium Disk, a solid-phase extraction filter manufactured by 3M Empore, which extracts strontium quantitatively from dilute nitric acid solution. Radiolead was observed to be retained on the Strontium Disk, and it was measured from the disks by liquid scintillation counting. For most samples, the new Strontium Disk method gave lower radiolead concentrations than the alpha-spectrometric determination, which may be due to quenching or to incomplete retention of radiolead on the Strontium Disk. The method still requires development, but its advantages are the high counting efficiency of liquid scintillation counting for radiolead and a smaller workload than the alpha-spectrometric determination. The alpha- and gamma-spectrometric results agreed well with each other. Self-absorption was not significant in gamma spectrometry for the sediment samples of at most about 5 grams measured in this work. For the Lake Umbozero sediment, the alpha-spectrometric and new-method results agreed with previously published values. The Pitkälampi sediment was mixed and could not be dated.
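Of the dating models mentioned above, the constant-initial-concentration (CIC) variant is the simplest to state: if the unsupported Pb-210 activity at deposition is constant, each layer's age follows directly from radioactive decay. A minimal sketch, using the standard Pb-210 half-life of 22.3 years; the activity values are made up for illustration, not measurements from the thesis:

```python
import math

# CIC Pb-210 dating: age of a layer from the decay of its unsupported
# Pb-210 activity relative to the (assumed constant) surface activity.
T_HALF = 22.3                  # Pb-210 half-life in years
LAMBDA = math.log(2) / T_HALF  # decay constant, 1/yr

def cic_age(a_surface, a_layer):
    """Age (yr) of a layer whose unsupported Pb-210 activity has decayed
    from a_surface (sediment surface) down to a_layer."""
    return math.log(a_surface / a_layer) / LAMBDA

# illustrative activities (Bq/kg), not data from the thesis
print(cic_age(100.0, 50.0))    # one half-life -> ~22.3 yr
print(cic_age(100.0, 10.0))    # ~74 yr, near the ~150 yr practical limit at 1%
```

The constant-rate-of-supply (CRS) model replaces the layer activities with cumulative unsupported Pb-210 inventories below each depth but uses the same logarithmic decay relation.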
  • Tikkanen, Otto (2020)
    The distribution coefficients of radium on potassium-rich biotite from Olkiluoto were obtained by batch sorption experiments carried out as a function of the radium and barium concentrations. The batch sorption experiments were carried out with four Olkiluoto-associated reference groundwater types: fresh, mildly reducing granitic reference groundwater ALLMR (modified Allard granitic water), glacial anoxic meltwater OLGA, carbonate-containing, reducing brackish reference groundwater OLBA, and saline, reducing reference groundwater OLSR. The main focus of the experiments was to evaluate the effect of water salinity on the sorption of radium on biotite. The results were compared with sorption results from previous studies on radium and its physicochemical analogue barium. According to the sorption results, the distribution coefficients of radium on biotite were largest in the lower-salinity waters. As the barium concentration of the sorption solutions was increased, the distribution coefficients of radium generally stayed level until they decreased noticeably at the highest studied Ba concentrations (10^-3 and 10^-4 mol/l). This suggests that radium is a poor competitor in ion-exchange adsorption on the surface of biotite when other cations are present in the solution. The brackish OLBA was a unique case among the otherwise predictable reference groundwaters: despite the increasing salinity of the sorption solutions, the apparent sorption of radium increased steadily throughout the barium isotherm. It was concluded that the high sulphate concentration of the OLBA reference groundwater caused coprecipitation of radium with the added barium and sulphate as (Ba,Ra)SO4.
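    In a batch sorption experiment of this kind, the distribution coefficient is conventionally computed from how much of the tracer activity is removed from solution at equilibrium. A minimal sketch of that standard formula; the solution volume, solid mass, and concentrations below are hypothetical illustration values, not the thesis's data:

```python
def distribution_coefficient(c_init, c_eq, volume_ml, mass_g):
    """Batch-sorption distribution coefficient Kd (ml/g): activity sorbed
    per gram of solid divided by activity remaining per ml of solution,
    Kd = (c_init - c_eq) / c_eq * V / m."""
    return (c_init - c_eq) / c_eq * volume_ml / mass_g

# hypothetical example: 10 ml of solution on 0.1 g of biotite, with 80 %
# of the radium tracer sorbed at equilibrium
kd = distribution_coefficient(1.0, 0.2, 10.0, 0.1)
print(f"Kd = {kd:.0f} ml/g")  # ~400 ml/g
```

A falling Kd with increasing Ba concentration, as reported above for the highest Ba levels, then reads directly as barium outcompeting radium for the same ion-exchange sites.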