Browsing by Title
Now showing items 2551-2570 of 4253
-
(2014) The aim of this thesis is to survey the current position of the normal distribution in upper secondary school (lukio) mathematics. The normal distribution is part of the statistics and probability course in both the basic and the advanced mathematics syllabus. Its place in upper secondary mathematics is well founded, since for decades the normal distribution has served as a useful mathematical tool in a wide range of practical studies and problems. Continual changes and cutbacks in the curricula, together with pending reforms of the matriculation examination, nevertheless also bring changes to the content that is taught. The thesis is guided by the hypothesis that the position of the normal distribution in upper secondary mathematics is weakening. The research question is approached from several different angles in order to build as versatile and comprehensive a picture of the topic as possible. The material used in the thesis consisted of existing sources: information on the history of the normal distribution in the curriculum was collected from curricula of different decades and from matriculation examinations of recent years. The normal distribution problems that appeared in the matriculation examinations were also analysed in the light of the comments and model solutions published in the journal Dimensio, with the aim of assessing students' skill level and interest in such problems. In addition, the thesis includes a textbook analysis of three book series, covering both the basic and the advanced syllabus. The different materials were interpreted separately, but similarities and contradictions between them were also sought. The normal distribution has to some extent had to give way to new and modern areas of mathematics, and this change has taken place gradually over the years. On the other hand, the normal distribution still holds a firm position within statistics and probability, although the related theoretical content has been reduced. On the basis of the matriculation examination problems in particular, it can be stated that normal distribution problems are clearly more formulaic and demand less creative thinking than before. The calculator reform introduced in the matriculation examination in 2012 has posed new challenges for the problem setters, as even previously demanding problems have become routine exercises for the user of an increasingly capable calculator. The thesis presents normal distribution problems that could be used to measure a student's genuine mathematical understanding rather than mere calculator skills.
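For reference, the distribution at the heart of these tasks is governed by two standard identities (a textbook fact, not quoted from the thesis): the density of N(μ, σ²) and the standardization used in table- or calculator-based exam problems,

$$ f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2 / (2\sigma^2)}, \qquad P(X \le x) = \Phi\!\left(\frac{x-\mu}{\sigma}\right), $$

where Φ is the cumulative distribution function of the standard normal distribution N(0, 1).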
-
(Helsingin yliopisto / Helsingfors universitet / University of Helsinki, 2009) The normalized compression distance NCD is a measure for computing the mutual distance between two data objects. The distance measure describes how much similarity the two compared data objects share. NCD is an approximation of the normalized information distance NID. NID is based on the Kolmogorov complexity of the data objects: the objects are represented as bit strings, and the more mutual information they contain, the more similar they are. NID is universal in the sense that it differs from an optimal method by at most a constant term, and this constant does not depend on the data objects being compared. NCD approximates NID using real-world compressors, which makes it only quasi-universal, yet still useful. On the basis of NCD a distance matrix is formed from the data, with which the items can be clustered and visualized as a tree structure using a particular quartet method. The method has been applied promisingly in many fields. The thesis goes through the theory behind the method, presents its applications, and focuses in particular on the stemmatological Heinrichi data set, which is tested with the free CompLearn software that produces the distance matrix and builds the tree with the quartet method.
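As a concrete reference (the standard formula from the literature, not quoted from the abstract), the NCD of two strings x and y computed with a real-world compressor C is

$$ \mathrm{NCD}(x, y) = \frac{C(xy) - \min\{C(x), C(y)\}}{\max\{C(x), C(y)\}}, $$

where C(x) is the compressed length of x and C(xy) the compressed length of the concatenation; values near 0 indicate highly similar objects and values near 1 essentially unrelated ones.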
-
(Helsingin yliopisto / University of Helsinki / Helsingfors universitet, 2000) The Schrödinger's cat paradox has occupied many researchers trying to understand quantum mechanics. The fate of the cat has contributed to the invention of ever more imaginative attempted solutions and to the birth of interpretations of quantum mechanics. The cat itself has also been interpreted much, and often incorrectly, so it is best to begin by defining the object of study: first I explain what Schrödinger's cat is and why it has been brought up in the first place. Sufficiently strong criticism can be raised against the three best-known types of interpretation of quantum mechanics (the Copenhagen interpretation, hidden-variable theories and many-worlds theories), which is why it makes sense to look for a resolution of the cat paradox elsewhere. In this work I examine the idea that it is possible to find an 'interpretation' of quantum mechanics from within quantum mechanics itself. The fate of the cat can be resolved by solving the equation of motion (master equation) for the density matrix of an idealized point-like cat (a harmonic oscillator) in an environment modelled as a heat bath. It turns out that the interaction of the environment with the cat gives rise to a phenomenon called decoherence, which suppresses the coherence terms (the off-diagonal elements) of the density matrix exponentially as a function of time. The decoherence time (the time in which the coherence terms have decreased to 1/e of their value) depends mainly on temperature and time. In this work I went through the high- and low-temperature behaviour of the decoherence time on both long and short time scales. It turned out that decoherence is an extremely fast phenomenon, which makes it very hard to observe; in fact, decoherence was first seen in experiments only in the 1990s. After this analysis it is easy to understand why the cat is not in a superposition but in a classical-like state. The cat paradox was only an apparent paradox, which disappears once the phenomenon is understood with sufficient accuracy: the cat cannot be completely isolated from its environment. In fact, the interaction between the cat's own atoms would be enough to bring about decoherence. Decoherence thus creates the correspondence between quantum and classical physics. A better and more precise understanding of decoherence benefits basic research in quantum mechanics, cosmology and, on the applied side, the construction of a quantum computer that may one day be built. As further research it would be interesting to study how the decoherence time depends on the number of interacting systems and on different types of interaction; in this work, the simplest possible coupling between the cat and the heat bath was used.
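For orientation, a textbook estimate along the lines the abstract describes (not a quotation from the thesis): for an oscillator coupled to a high-temperature heat bath, the off-diagonal elements of the reduced density matrix decay as

$$ \rho(x, x', t) \;\sim\; \rho(x, x', 0)\, e^{-t/\tau_D}, \qquad \tau_D \simeq \tau_R \left(\frac{\hbar}{\Delta x\,\sqrt{2 m k_B T}}\right)^{2}, $$

where τ_R is the relaxation time, Δx = |x − x'| the separation of the superposed positions, m the mass and T the bath temperature; for macroscopic masses and separations τ_D is many orders of magnitude shorter than τ_R, which is why the decay is so hard to catch experimentally.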
-
(2013) The ability to deduce the three-dimensional structure of a protein from its one-dimensional amino acid chain is a long-standing challenge in structural biology. Accurate structure prediction has enormous application potential, e.g. in drug development and the design of novel enzymes. In the past this problem has been studied experimentally (X-ray crystallography, nuclear magnetic resonance imaging) and computationally by simulating the molecular dynamics of protein folding. However, the latter requires enormous computing resources and the former is expensive and time-consuming. Direct contact analysis (DCA) is an inference method relying on direct correlations measured from multiple sequence alignments (MSAs) of protein families to predict contacts between amino acids in the three-dimensional structure of a protein. It solves the 21-state inverse Potts problem of statistical physics: given the correlations, what are the interactions between the amino acids of a protein? The current state of the art in the DCA approach is the plmDCA algorithm relying on pseudolikelihood maximization. In this study the performance of the parallelised asymmetric plmDCA algorithm is tested on a diverse set of more than 100 protein families. It is seen that, for MSAs with more than approximately 2000 sequences, plmDCA is generally able to predict more than half of the 100 top-scoring contacts correctly, with the prediction accuracy increasing almost linearly as a function of the number of sequences. Parallelisation of plmDCA is also observed to make the algorithm tens of times faster (depending on the number of CPU cores used) than the previously described serial plmDCA. Extensions to the Potts model taking into account the differences in the distributions of gaps and amino acids in MSAs are investigated. An extension incorporating the position-dependent frequencies of gaps of length one into the Potts model is found to increase the prediction accuracy for short sequences. Further and more extensive studies are, however, needed to discover the full potential of this approach.
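As a reference for the model class involved (standard in the DCA literature, assumed here rather than quoted from the thesis), the 21-state Potts model assigns to an aligned sequence σ = (σ_1, ..., σ_L) the probability

$$ P(\sigma) = \frac{1}{Z} \exp\!\Big( \sum_{i=1}^{L} h_i(\sigma_i) + \sum_{1 \le i < j \le L} J_{ij}(\sigma_i, \sigma_j) \Big), $$

where the fields h_i and couplings J_{ij} are the parameters to be inferred; pseudolikelihood maximization replaces the intractable likelihood, with its partition function Z, by a product of conditional likelihoods of single columns, which is what makes plmDCA fast enough for large alignments.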
-
(2018) The theme of the thesis is the preparation of cellulose particles from homogeneous ionic liquid solutions. The work approaches this topic from the materials chemistry perspective and focuses on the physical properties of the obtained materials and the methodological details of their preparation. Novel cellulose macro- and microparticles have been prepared from two cellulose solvent systems, [DBNH][OAc]/DMSO and [P4441][OAc]/GVL. The two types of particles and their preparation methods are very different and distinct. The large beads are prepared using a remarkably simple hands-on method, while the small particles are prepared by exploiting a thermally triggered gelation of cellulose. Both particles are scientifically interesting, and observations made during their preparation provide insight into the field of cellulose materials chemistry. Special attention has been given to exploring their potential applications outside the laboratory in the commercial and industrial sectors. The laboratory work examines the experimental parameters of the cellulose particle production methods in detail. The particle preparation methods are very simple in practice. Emphasis has been given to describing the observed novel phenomena and to fine-tuning the processes for efficiency. The obtained cellulose particles have been analysed using a variety of methods including optical microscopy, SEM, XRD, WAXS, CP-MAS, VSM and H-NMR.
-
(2022) The nature of dense matter is one of the greatest mysteries in high energy physics. For example, we do not know how QCD matter behaves at neutron star densities, as there the matter is strongly coupled. Thus auxiliary methods have to be applied. One of these methods is the AdS/CFT correspondence, which maps the strongly coupled field theory to a weakly coupled gravity theory. The best known example of this correspondence is the duality between N = 4 Super Yang-Mills and type IIB supergravity in AdS_5 × S^5. This duality at finite temperature and chemical potential is the one we invoke in our study. It has been hypothesized that dense matter would be in a color superconducting phase, where pairs of quarks form a condensate. This has a natural interpretation in the gravity theory. The AdS_5 × S^5 geometry is sourced by a stack of N coincident D3-branes, and this N corresponds to the gauge group SU(N) of N = 4 SYM. To study spontaneous breaking of this gauge group, one then studies systems where D3-branes have separated from the stack. In this work we present two methods of studying the possibility of separating these branes from the stack. First we present an effective potential for a probe brane, which covers the dynamics of a single D3-brane in the bulk; this is done using the action principle. Then we construct an effective potential for a shell built from multiple branes, using the Israel junction conditions. A single brane in the bulk corresponds to SU(N) → SU(N−1) × U(1) symmetry breaking, and a shell of k branes corresponds to SU(N) → SU(N−k) × U(1)^k symmetry breaking. A similar spontaneous breaking of the gauge group happens in QCD when we transition to a CSC phase, and hence these phases are called color superconducting. We find that for sufficiently high chemical potential the system is susceptible to single-brane nucleation. The phase with higher breaking of the gauge group, which corresponds to having a shell made out of branes in the bulk, is metastable. This implies that we were able to construct CSC phases of N = 4 SYM; however, the exact details of the phase diagram structure are left for future research.
-
(2014) In this thesis we concentrate on the problem of modelling real document collections, especially sequential document collections. The goal is to discover important hidden topics in the collection automatically by statistical modelling of its content. For sequential document collections, we also want to capture how the topics change over time. To date, several computational tools, such as latent Dirichlet allocation (LDA), have been developed for modelling document collections. In this thesis we develop new topic models for modelling the dynamic characteristics of a sequential document collection such as a news archive. We are, for example, interested in splitting the topics into long-term topics such as 'Eurozone crisis' that are discussed over years, and short-term topics such as 'Winter Olympics in 2014' that are popular for only a few weeks. We first review popular models for detecting hidden topics and their evolution, and then propose two new approaches for detecting these two kinds of topics. To provide real-world data for the evaluation of the new approaches, we additionally design a pipeline for constructing sequential document collections by collecting documents from the Web. To investigate the performance of the new approaches from different aspects, we conduct qualitative and quantitative experiments on two different kinds of datasets: news documents collected by the pipeline, and 17 years of documents from the Neural Information Processing Systems (NIPS) conferences. The qualitative experiments evaluate the quality of the discovered topics, whereas the quantitative experiments concern their ability to predict new words in unseen documents.
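For context, the standard LDA generative process (assumed here as background, not taken from the thesis) draws, for each document d and word position n,

$$ \theta_d \sim \mathrm{Dir}(\alpha), \quad \varphi_k \sim \mathrm{Dir}(\beta), \quad z_{d,n} \sim \mathrm{Mult}(\theta_d), \quad w_{d,n} \sim \mathrm{Mult}(\varphi_{z_{d,n}}); $$

dynamic extensions of the kind studied in the thesis let the document-topic mixtures θ_d or the topic-word distributions φ_k evolve over the time stamps of the collection, which is what separates long-term from short-term topics.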
-
(2016) Ring-opening polymerization and click reactions were used to synthesize thermo-responsive glyco-block copolymers consisting of a polyether block with pendant α-D-mannose groups and random copolymer blocks of poly(glycidyl methyl ether)-poly(epoxyhexane). The thermo-responsive block was synthesized as a random copolymer to decrease the phase transition temperature to a usable region. Temperature responsiveness would enable the polymers to switch between dissolved and aggregated states. Such glycopolymers would be interesting candidates for studying carbohydrate-lectin interactions and drug delivery properties. The synthesized polymers were analyzed using nuclear magnetic resonance and Fourier-transform infrared spectroscopy, turbidimetry and differential scanning calorimetry. Both glycopolymers and thermo-responsive copolymers were synthesized. The latter showed good control over the polymerization, leading to clickable azide functionality and the desired ratios of monomers in the copolymers. Altering the ratios of glycidyl methyl ether and epoxyhexane in the feed led to variations in the cloud points and glass transition temperatures of the copolymers. The synthesis of the glycopolymers proved difficult and could not be initiated using clickable propargyl alcohol. Also, no effective way was found to purify the glycopolymers initiated using benzyl alcohol. Combining the glycopolymers and the thermo-responsive copolymers was attempted using a click reaction. A triazole signal was detected by nuclear magnetic resonance spectroscopy, suggesting that the reaction was successful. However, further studies are required to confirm this.
-
(2014) Solutions of thermoresponsive polymers exhibit a drastic and discontinuous change in their properties with temperature. A thermoresponsive polymer that is soluble at low temperatures but undergoes a reversible phase transition in a solvent with rising temperature, resulting in precipitation or cloud formation, is said to exhibit lower critical solution temperature (LCST)-type behaviour. Polymers that exhibit upper critical solution temperature (UCST)-type behaviour, on the other hand, are soluble in water at temperatures above the UCST and become reversibly insoluble when the temperature decreases below the upper critical solution temperature. This work deals with the synthesis of novel upper critical solution temperature block copolymers and the effect of pH and electrolyte on their cloud point temperatures. The polymers poly(N-acryloylglycinamide) (PNAGA), poly(ethylene oxide)-b-poly(N-acryloylglycinamide) (PEO-b-PNAGA), poly(N-isopropylacrylamide)-b-poly(N-acryloylglycinamide) (PNIPAAm-b-PNAGA) and poly(ethylene oxide)-b-poly(N-acryloylglycinamide)-b-poly(N-isopropylacrylamide) (PEO-b-PNAGA-b-PNIPAAm) were synthesized by reversible addition-fragmentation chain-transfer (RAFT) polymerization in dimethyl sulphoxide. PEO-b-PNAGA and PEO-b-PNAGA-b-PNIPAAm exhibited UCST-type behaviour both in pure water (studied by NMR) and in 0.1 M NaCl solutions (studied by turbidimetry). The poly(ethylene oxide) (PEO) block played an important role in enhancing the UCST behaviour of PNAGA by improving the polymer's solubility. Yet higher cloud points in 0.1 M NaCl were observed than for PNAGA, due to the presence of the hydrophobic dodecyl end group. Measuring the particle size between 10 and 50 °C by dynamic light scattering showed that the polymers phase separated on cooling below the UCST. PEO-b-PNAGA-b-PNIPAAm showed multiresponsive behaviour both in pure water and in electrolyte solution, exhibiting both an LCST and a UCST. A change in pH had a dramatic effect on the UCST of PNAGA owing to the carboxylic acid end group, the cloud points shifting to higher temperatures with increasing pH. The cloud points in pH 4 buffer solutions were lower for the PNAGA block copolymers than for PNAGA itself, due to the high solubility of the poly(ethylene oxide) block in aqueous solutions.
-
(2020) We apply the modern effective field theory framework to study the nucleation rate in high-temperature first-order phase transitions. With this framework, an effective description for the critical bubble can be constructed, and the exponentially large contributions to the nucleation rate can then be computed from the effective description. The results can be used to make more accurate predictions for cosmological first-order phase transitions, for example the gravitational wave spectrum from a transition, which is important for the planned LISA experiment. We start by reviewing a nucleation rate calculation for a classical scalar field to understand how the critical bubble arises, via a saddle-point approximation, as the central object of the nucleation rate calculation. We then focus on the statistical part of the nucleation rate, coming from the Boltzmann suppression of nucleating bubbles. This is done by constructing from the thermal field theory an effective field theory that can describe the critical bubble. We give an example calculation with the renormalizable model of two $\mathbb{Z}_2$-symmetric scalar fields. The critical bubbles of the model and their Boltzmann suppression are studied numerically, for which we further develop a recently proposed method.
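For orientation, the standard thermal nucleation estimate (not a formula quoted from the thesis) writes the rate per unit volume as exponentially suppressed by the action of the critical bubble,

$$ \Gamma \;\sim\; A(T)\, e^{-S_3(T)/T}, $$

where S_3 is the three-dimensional Euclidean action of the critical bubble and the prefactor A(T) collects fluctuation and dynamical factors; the effective-theory construction described in the abstract aims at computing the exponent consistently at high temperature.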
-
(2022) Nucleic acids are natural single- or double-stranded polymers consisting of deoxyribo- or ribonucleosides linked to each other by phosphodiester bonds. When such chains are prepared by chemical methods, the phosphate group has to be activated in a specific way, and the functional groups that do not take part in the reaction have to be protected temporarily or permanently. The interest in nucleic acid chemistry stems from the growing need for synthetic oligomers and their analogues as essential research tools in molecular biology and medicine. The literature part of the thesis presents different methods for the chemical synthesis of short oligonucleotides. The formation of phosphodiester bonds usually proceeds through either phosphotriester or phosphite triester intermediates. Owing to the higher reactivity of P(III) intermediates, the phosphite triester method, and in particular the phosphoramidite method, has attracted attention. Several different nucleoside phosphoramidites have been proposed as starting materials for oligonucleotide synthesis in the search for a balance between stability and reactivity. For this reason the H-phosphonate method, which combines the advantages of both the phosphotriester and phosphite triester methods as well as those of the phosphodiester method (e.g. the absence of a protecting group at the phosphorus centre), can be used as an alternative to the phosphoramidite method, especially in the synthesis of RNA and of acid-labile oligonucleotide analogues. All methods, however, have both advantages and disadvantages, so no generally applicable and efficient synthesis method exists yet; rather, different methods suit different cases. For large-scale synthesis, for example, the reaction time, reagents, purification method and other resources have to be taken into account. In addition, the approaches are usually either very laborious, especially in the purification of the final product, or multi-step sequences with a low overall yield. In the experimental part, pentacarboxycyclopentadienes (PCCP) were prepared as three kinds of Brønsted acid catalysts. The possibility of using PCCP derivatives for the regioselective protection of the 5'-hydroxyl group of a nucleoside with an acetal group was investigated. A satisfactory amount of 5'-O-acetal-protected thymidine was obtained with this method. The method seems promising with some further development.
-
(2013) This thesis presents and proves the Nullstellensatz, i.e. Hilbert's theorem on zero sets. The proof assumes the material of the course 'Algebra I'. The Nullstellensatz is a multidimensional generalization of the fundamental theorem of algebra. It gives a bijective correspondence, known as the Hilbert correspondence, between varieties and radical ideals. Many central results of algebraic geometry rest on the Nullstellensatz. Before proving the Nullstellensatz, the thesis presents some theory of ideals and varieties. In addition, theorems on Noetherian rings and modules needed for the proof of the Nullstellensatz are proved. Finally, some corollaries of the Nullstellensatz are proved.
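For the reader's convenience, the statement being proved, in its standard strong form over an algebraically closed field k, is

$$ I\big(V(\mathfrak{a})\big) = \sqrt{\mathfrak{a}} \quad \text{for every ideal } \mathfrak{a} \subseteq k[x_1, \dots, x_n], $$

where V(𝔞) is the zero set of 𝔞, I(·) the ideal of polynomials vanishing on a set, and √𝔞 the radical of 𝔞; the correspondence V ↔ I then restricts to the bijection between affine varieties and radical ideals mentioned above.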
-
(2012) Vlasiator is a new massively parallel hybrid-Vlasov simulation code being developed at the Finnish Meteorological Institute with the purpose of building a new global magnetospheric model going beyond magnetohydrodynamics (MHD). It solves Vlasov's equation for the ion distribution function in the full six-dimensional phase space and describes the electrons as a massless charge-neutralising fluid using the MHD equations, thus including ion kinetic effects. The Vlasov equation solver is based on a second-order, three-dimensional finite volume wave-propagation algorithm, making use of Strang splitting to separate translation in space from acceleration in velocity space. The electromagnetic fields are obtained through a second-order, finite volume upwind constrained transport method which conserves the divergence of the magnetic field by construction. This work presents the numerical and physical validation tests developed and/or run by the author for Vlasiator, without however covering the technical aspects pertaining to implementation or parallelisation. The numerical quality of the solvers is assessed through their isotropy, their conservation of div B = 0 and their order of accuracy in space and time. The physical validation tests include an assessment of the diffusive properties of the Vlasov solver, a brief discussion of results obtained from the Riemann problem displaying kinetic effects in the shock solution, and finally dispersion plots for quasiperpendicular and quasiparallel wave modes, which are presented and discussed. The conclusions are that Vlasiator performs well and in line with the expected characteristics of the methods implemented, provided the resolution is good enough. In space, ion kinetic scales should be resolved for kinetic effects going beyond an MHD description to emerge. In velocity space the resolution should yield a smooth discretisation of the ion distribution function, otherwise spurious non-physical artefacts can crop up in the results. The higher-order correction terms included in the solvers ensure good orders of accuracy even for discontinuous solutions, the conservation of div B = 0 holds up to floating-point accuracy, and the dispersion plots match analytic solutions remarkably well.
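For reference, the equation the solver propagates (in its standard form, assumed here rather than quoted) is the Vlasov equation for the ion distribution function f(r, v, t),

$$ \frac{\partial f}{\partial t} + \mathbf{v} \cdot \nabla_{\mathbf{r}} f + \frac{q}{m}\left(\mathbf{E} + \mathbf{v} \times \mathbf{B}\right) \cdot \nabla_{\mathbf{v}} f = 0, $$

and Strang splitting advances the spatial translation term and the velocity-space acceleration term in alternating sub-steps, each handled by its own finite volume solver.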
-
(Helsingin yliopisto / Helsingfors universitet / University of Helsinki, 2007) The module of a quadrilateral is a positive real number which divides quadrilaterals into conformal equivalence classes. This is an introductory text on the module of a quadrilateral with some historical background and some numerical aspects. The work discusses the following topics: 1. Preliminaries 2. The module of a quadrilateral 3. The Schwarz-Christoffel mapping 4. Symmetry properties of the module 5. Computational results 6. Other numerical methods. The appendices include the numerical evaluation of the elliptic integrals of the first kind, Matlab programs and scripts, and possible topics for future research. The numerical results section covers additive quadrilaterals and the module of a quadrilateral under the movement of one of its vertices.
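As a brief reminder of the central definition (standard in the literature, not quoted from the thesis, and stated in one common normalization): a quadrilateral Q(z_1, z_2, z_3, z_4) is a Jordan domain with four marked boundary points, and its module M is the unique h > 0 such that Q can be mapped conformally onto the rectangle $[0,1] \times [0,h]$ with the marked points going to the corners. For example, a rectangle with sides a and b, with the vertices marked in order starting from a side of length a, has module b/a; conformally equivalent quadrilaterals have equal modules, which is the equivalence mentioned in the abstract.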
-
(2022) In this thesis, we demonstrate the use of machine learning in numerically solving both linear and non-linear parabolic partial differential equations. By using deep learning, rather than more traditional, established numerical methods (for example, Monte Carlo sampling), to calculate numeric solutions to such problems, we can tackle even very high dimensional problems, potentially overcoming the curse of dimensionality, which occurs when the computational complexity of a problem grows exponentially with the number of dimensions. In Chapter 1, we describe the derivation of the computational problem needed to apply the deep learning method in the case of the linear Kolmogorov PDE. We start with an introduction to a few core concepts in stochastic analysis, particularly stochastic differential equations, and define the Kolmogorov backward equation. We describe how the Feynman-Kac theorem implies that the solution to the linear Kolmogorov PDE is a conditional expectation, and therefore how we can turn the numerical approximation of such a PDE into a minimisation problem. Chapter 2 discusses the key ideas behind the term deep learning; specifically, what a neural network is and how we can apply it to solve the minimisation problem from Chapter 1. We describe the key features of a neural network and the training process, and how the parameters can be learned through gradient-descent-based optimisation. We summarise the numerical method in Algorithm 1. In Chapter 3, we implement a neural network and train it to solve a 100-dimensional linear Black-Scholes PDE with underlying geometric Brownian motion, and similarly with correlated Brownian motion. We also illustrate an example with a non-linear auxiliary Itô process: the stochastic Lorenz equation. We additionally compute a solution to the geometric Brownian motion problem in one dimension and compare the accuracy of the solution found by the neural network with that found by two other numerical methods, Monte Carlo sampling and finite differences, as well as with the solution found using the implicit formula for the solution. In two dimensions, the solution of the geometric Brownian motion problem is compared against a solution obtained by Monte Carlo sampling, which shows that the neural network approximation falls within the 99% confidence interval of the Monte Carlo estimate. We also investigate the impact of the frequency of re-sampling the training data and of the batch size on the rate of convergence of the neural network. Chapter 4 describes the derivation of the equivalent minimisation problem for solving a Kolmogorov PDE with non-linear coefficients, where we discretise the PDE in time and derive an approximate Feynman-Kac representation on each time step. Chapter 5 demonstrates the method on an example of a non-linear Black-Scholes PDE and a Hamilton-Jacobi-Bellman equation. The numerical examples are based on the code by Beck et al. in their papers "Solving the Kolmogorov PDE by means of deep learning" and "Deep splitting method for parabolic PDEs", and are written in the Julia programming language, using the Flux machine learning library for Julia. The code used to implement the method can be found at https://github.com/julia-sand/pde_approx
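For orientation, a standard statement of the reduction described here (under the usual regularity assumptions, and not quoted from the thesis): the Feynman-Kac representation gives

$$ u(T, x) = \mathbb{E}\big[\varphi(X_T^{x})\big], \qquad dX_t^{x} = \mu(X_t^{x})\,dt + \sigma(X_t^{x})\,dW_t, \quad X_0^{x} = x, $$

and, for ξ sampled uniformly from the region of interest, $u(T, \cdot)$ is the minimiser of $f \mapsto \mathbb{E}\big[\,|\varphi(X_T^{\xi}) - f(\xi)|^2\,\big]$ over measurable functions f, since the conditional expectation is the L² projection; this squared error is the loss a neural network can be trained on, with the SDE simulated by, e.g., an Euler-Maruyama scheme.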
-
(2014) Logarithmic capacity is important in several areas of applied mathematics and may go by different names depending on the field of research. In number theory, for example, the logarithmic capacity is called the transfinite diameter, and in polynomial approximation it is known as the Chebyshev constant. In potential theory the logarithmic capacity is defined as a measure of the size of a compact set in C. Yet even though the logarithmic capacity is so important in many fields of research, it is extremely difficult to compute. Thanks to its connection to Green's functions, the logarithmic capacity can be computed analytically for certain simpler sets, such as ellipses and squares, but for more complicated sets one can only estimate upper and lower bounds. For this reason several numerical methods have been developed for this purpose. At the beginning of this thesis we present the background needed for defining and computing the logarithmic capacity. In Chapter 4 we present the definition of the logarithmic capacity and its connection to Green's functions, and show how this connection can be used to compute the logarithmic capacity analytically. Here we also present some bounds for the logarithmic capacity, as well as the definition of the transfinite diameter and its connection to the logarithmic capacity. In Chapter 5 we present four different numerical methods for approximating the logarithmic capacity: the Dijkstra-Hochstenbach method, Rostand's method, the Ransford-Rostand method, and the use of Schwarz-Christoffel mappings for computing the logarithmic capacity. We also implement Rostand's method as a MATLAB program.
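For reference, the standard potential-theoretic definition (assumed here as background, not quoted from the thesis): the logarithmic capacity of a compact set E ⊂ C is

$$ \mathrm{cap}(E) = e^{-V(E)}, \qquad V(E) = \inf_{\mu} \iint \log\frac{1}{|z - w|}\, d\mu(z)\, d\mu(w), $$

the infimum being taken over probability measures μ supported on E; for example, a closed disk of radius r has capacity r and a line segment of length L has capacity L/4, which are among the few cases with closed-form answers.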
-
(Helsingin yliopisto / University of Helsinki / Helsingfors universitet, 1993) This study discusses the methods, algorithms and implementation techniques involved in the computational solution of the unconstrained minimization problem $\min_{x \in \mathbb{R}^n} f(x)$, where $f : \mathbb{R}^n \to \mathbb{R}$ and $\mathbb{R}^n$ denotes the n-dimensional Euclidean space. The main goal of this study was to implement an easy-to-use software package, running on personal computers, for unconstrained minimization of multidimensional functions. This software package includes C language implementations of six minimization methods, a user interface for entering each minimization problem, and an interface to the general software system Mathematica(TM), which is used for plotting the problem function and the minimization route. The following minimization methods are discussed: parabolic interpolation in one dimension, and the downhill simplex, direction set, variable metric, conjugate gradients and modified steepest descent methods in multiple dimensions. The first part of this study discusses the theoretical background of the minimization algorithms implemented in the software package. The second part introduces the overall design of the minimization software and describes in greater detail the individual software modules which, as a whole, implement the software package. The third part introduces the techniques for testing the minimization algorithms, describes the set of test problems, and discusses the test results.
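As a toy illustration of the last method in that list, here is a minimal Python sketch of steepest descent with a backtracking line search. This is a hedged example for orientation only: the thesis's implementation is in C, and the test function, step parameters and tolerances below are chosen for the example rather than taken from the thesis.

    import numpy as np

    def steepest_descent(f, grad, x0, alpha=1.0, beta=0.5, tol=1e-8, max_iter=100000):
        """Minimize f starting from x0 by moving against the gradient."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:          # stop when the gradient is (almost) zero
                break
            step = alpha
            while f(x - step * g) >= f(x):       # backtracking: shrink step until f decreases
                step *= beta
                if step < 1e-16:
                    return x
            x = x - step * g
        return x

    # Example: the Rosenbrock function, a classic test problem for minimizers.
    f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
    grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                               200 * (x[1] - x[0]**2)])
    print(steepest_descent(f, grad, [-1.2, 1.0]))   # converges slowly towards (1, 1)

Steepest descent is the simplest of the six methods; the direction set, variable metric and conjugate gradients methods differ mainly in how the search direction is chosen at each iteration.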
-
(2014) Magnetic reconnection is a process occurring in, e.g., space plasmas that allows rapid changes of magnetic field topology and converts magnetic energy into thermal and non-thermal plasma energy. Solar flares in particular are good examples of explosive magnetic energy release caused by magnetic reconnection, and it has been estimated that 50% of the total released energy is converted into the kinetic energy of charged particles. Despite being such an important process in astrophysical phenomena, the theory and the mechanisms behind magnetic reconnection are still poorly understood. In this thesis, the acceleration of electrons in a two-and-a-half-dimensional magnetic reconnection region with solar flare plasma conditions is studied using numerical modeling. The behavior of the electrons is determined by calculating the trajectories of all particles inside a simulation box. The equations of motion are solved using a particle mover called the Boris method. The aim of this work is to better understand the acceleration of non-thermal electrons and, for example, to explain how the inflow speed affects the final energy of the particles, what part of the reconnection area the most energetic electrons come from, and how the scattering frequency changes the energy spectra of the electrons. The focus of this thesis lies in numerical modeling, but all the relevant physics behind the subject is also briefly explained. First the basics of plasma physics are introduced and the leading models of magnetic reconnection are presented. Then the simulation setup and reasonable values for the simulation parameters are defined, and the results of the simulations are discussed. Based on these, conclusions are drawn.
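To make the particle mover concrete, the sketch below shows the standard non-relativistic Boris step in Python (half electric kick, magnetic rotation, second half kick). This is the generic textbook form given for orientation; the field values, time step and loop are illustrative and not taken from the thesis.

    import numpy as np

    def boris_push(x, v, E, B, q, m, dt):
        """Advance one particle by one time step with the Boris scheme."""
        v_minus = v + (q * dt / (2 * m)) * E            # first half of the electric kick
        t = (q * dt / (2 * m)) * B                      # rotation vector
        s = 2 * t / (1 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)         # rotation about B
        v_new = v_plus + (q * dt / (2 * m)) * E         # second half of the electric kick
        return x + v_new * dt, v_new

    # Example: an electron gyrating in a uniform magnetic field (SI units, illustrative values).
    q, m = -1.602e-19, 9.109e-31
    x, v = np.zeros(3), np.array([1.0e6, 0.0, 0.0])
    E, B = np.zeros(3), np.array([0.0, 0.0, 1.0e-8])
    dt = 1.0e-6                                          # small compared with the gyroperiod
    for _ in range(1000):
        x, v = boris_push(x, v, E, B, q, m, dt)

The scheme conserves kinetic energy exactly in a pure magnetic field, which is one reason it is the standard choice for test-particle and particle-in-cell codes.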
-
(2015) This work explores the lateral spreading of hot, thick, Paleoproterozoic crust via a series of 2D thermomechanical numerical models based on two geometrical a priori models of the thickened crust: plateau and plateau margin. High Paleoproterozoic radiogenic heat production is assumed. The material viscosity is temperature-dependent following the Arrhenius law. The experiments use two sets of rheological parameters for the crust: dry (granite/felsic granulite/mafic granulite) and wet (granite/diorite/mafic granulite). The results of the modelling are compared to seismic reflection sections and surface geological observations from the Paleoproterozoic Svecofennian orogen. The numerical modelling is performed with Ellipsis, a particle-in-cell finite element code suitable for 2D thermomechanical modelling of lithospheric deformation. It uses Lagrangian particles for tracking material interfaces and histories, which allows recording of material P-T-t paths. The plateau models are based on a 480 km long section of 65 km thick three-layer plateau crust. In the plateau margin models, a transition from the 65 km thick plateau to a 40 km thick foreland is imposed in the middle of the model. The models are extended symmetrically from both ends with slow (1.9 mm/a) and fast (19 mm/a) velocities. Gravitational collapse is simulated with an additional set of fixed-boundary plateau margin models. The models study the effect of freely moving boundaries on the crustal structure and the conditions for mid-crustal flow. Strong mid-crustal channel flow is seen in plateau margin models with dry rheology and slow extension or with fixed boundaries. With fast extension or wet rheology the channel flow weakens or disappears. In models with slow extension or fixed boundaries, partial melting controls the style of deformation in the middle crust. Vertical movement of the partially molten material destroys lateral flow structures in the plateau regions. According to the P-T-t paths, the model materials do not experience temperatures high enough to match the HT-LP metamorphic conditions typical of Svecofennian orogenic rocks. Metamorphic conditions in the dry rheology models have counterparts in the LT-LP (>650 °C at ≤600 MPa) amphibolite facies rocks of the Pielavesi area. Plateau margin models with dry rheology and slow extension or fixed boundaries developed mid-crustal channel flow, thinning of the middle crust, exhumation of mid-crustal domes and a smooth Moho, all of which are found in crustal-scale reflection sections. The results of this work suggest a plateau margin architecture prior to extension, with extension taking place at slow velocities or through purely gravitational collapse, although the peak temperature of the Svecofennian HT-LP metamorphism was not attained in the models.
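For orientation, a generic Arrhenius-type viscosity law of the kind referred to in the abstract can be written as (the symbols are generic; the actual parameter values and functional form used in the thesis are not reproduced here)

$$ \eta(T) = \eta_0 \exp\!\left(\frac{E_a}{R\,T}\right), $$

where η_0 is a pre-exponential constant, E_a an activation energy, R the gas constant and T the absolute temperature; the exponential temperature dependence is what allows a radiogenically heated middle crust to weaken by orders of magnitude and flow laterally.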
-
(2020) The Southern Andes is an important region in which to study strain partitioning behavior, owing to the variable nature of its subduction geometry and continental mechanical properties. Along the plate margin between the Nazca plate and the South American plate, the strain partitioning behavior varies from north to south, while the plate convergence vector shows little change. The study area, the LOFZ region, lies between 38°S and 46°S in the Southern Andes, around 100 km east of the trench. It has been characterized as an area bounded by margin-parallel strike-slip faults that create a forearc sliver, the Chiloe block. It is also located on top of an active volcanic zone, the Southern Volcanic Zone (SVZ). This area is notably different from the Pampean flat-slab segment directly to the north of it (between latitudes 28°S and 33°S), where volcanic activity is absent and slip seems to be accommodated completely by oblique subduction. Seismicity in the central LOFZ is spatially correlated with NE-trending margin-oblique faults that are similar to the structure of SC-like kinematics described by Hippertt (1999). The margin-oblique faults and rhomb-shaped domains that accommodate strain have also been captured in analog experiments by Eisermann et al. (2018), who relate the change in GPS velocity at the northern end of the LOFZ to a southward decrease in crustal strength, possibly caused by the change in dip angle. This project uses DOUAR (Braun et al. 2008), a numerical modelling software, to explore the formation of the complex fault system in the LOFZ in relation to strain partitioning in the Southern Andes. We implement numerical versions of the analog models from Eisermann et al. (2018), called the MultiBox and NatureBox models, to test whether analog modelling results can be reproduced with numerical models. We also create simplified models of the LOFZ, the Natural System models, to compare the model displacement field with the deformation pattern in the area. Our numerical model results in general replicate the findings from the MultiBox experiment of Eisermann et al. (2018). We observe the formation of NW-trending margin-oblique faulting in the central deformation zone, which creates rhomb-shaped blocks together with the margin-parallel faults. More strain is accommodated in the stronger part of the model, where the strain is more distributed across the area or prefers to settle on a few larger bounding faults, whereas in the weaker part of the model the strain tends to localize on a larger number of smaller faults. The margin-oblique faults and rhomb-shaped domains accommodating strain are not present in the Natural System models with or without a strength difference along strike. This raises the question of how the complex fault system forms in both the analog models and our numerical versions of them; hypotheses other than a strength gradient could be tested in the future.