Browsing by Title
-
(2023) "Don't put all your eggs in one basket" is a common saying that applies particularly well to investing. Hence the concept of portfolio diversification, which is generally accepted to be a good principle. But is it always and in every situation preferable to diversify one's investments? This Master's thesis explores this question in a restricted mathematical setting. In particular, we examine the profit-and-loss distribution of a portfolio of investments using probability distributions that produce extreme values more frequently than others. The theoretical restriction imposed in this thesis is that the random variables modelling the profits and losses of the individual investments are assumed to be independent and identically distributed. The results of this Master's thesis originate from Rustam Ibragimov's article Portfolio Diversification and Value at Risk Under Thick-Tailedness (2009). The main results concern two particular cases. The first concerns probability distributions that produce extreme values only moderately often; in this case, the accepted wisdom of portfolio diversification is proven to make sense. The second concerns probability distributions that can be considered to produce extreme values extremely often; in this case, diversification is proven to increase the overall risk of the portfolio, and it is therefore preferable not to diversify one's investments. In this Master's thesis we first formally introduce and define heavy-tailed probability distributions as those probability distributions that produce extreme values much more frequently than others. Second, we introduce and define particular important classes of probability distributions, most of which are heavy-tailed. Third, we give a definition of portfolio diversification by utilizing a mathematical theory that classifies how far apart from or close to each other the components of a vector are. Finally, we use all the introduced concepts and theory to answer the question of whether portfolio diversification is always preferable. The answer is that there are extreme situations in which portfolio diversification is not preferable.
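A minimal simulation sketch, not taken from the thesis, of the dichotomy described above for independent and identically distributed stable losses: an equally weighted portfolio has a lower Value at Risk than a single position when the stability index alpha exceeds 1, and a higher one when alpha is below 1. The alpha values, portfolio size, confidence level and sample size below are illustrative assumptions.

# Illustrative sketch only: diversification and VaR under alpha-stable losses.
import numpy as np
from scipy.stats import levy_stable

def portfolio_var(alpha, n_assets, level=0.99, n_sim=100_000, seed=0):
    # Simulate i.i.d. symmetric alpha-stable losses for n_assets positions.
    losses = levy_stable.rvs(alpha, 0.0, size=(n_sim, n_assets), random_state=seed)
    portfolio_loss = losses.mean(axis=1)        # equally weighted portfolio (weights 1/n)
    return np.quantile(portfolio_loss, level)   # loss quantile at the given level = VaR

for alpha in (1.5, 0.7):    # moderately vs. extremely heavy tails (illustrative values)
    print(alpha, portfolio_var(alpha, 1), portfolio_var(alpha, 10))
# Expected pattern: for alpha = 1.5 the 10-asset VaR is smaller than the 1-asset VaR
# (diversification helps); for alpha = 0.7 it is larger (diversification hurts).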
-
(2019) In this thesis we cover some fundamental topics in mathematical finance and construct market models for option pricing. An option on an asset is a contract giving the owner the right, but not the obligation, to trade the underlying asset for a fixed price at a future date. Our main goal is to find a price for an option that will not allow the existence of an arbitrage, that is, a way to make a riskless profit. We will see that hedging has an essential role in this pricing. Both hedging and pricing are very important tasks for an investor trading in the constantly growing derivative markets. We begin our mission by assuming that the time parameter is a discrete variable. The advantage of this approach is that we are able to jump into financial concepts with only a small quantity of prerequisites. A proper understanding of these concepts in discrete time is crucial before moving to continuous-time market models, that is, models in which the time parameter is a continuous variable. This may seem like a minor transition, but it has a significant impact on the complexity of the mathematical theory. In discrete time, we review how the existence of an equivalent martingale measure characterizes market models. If such a measure exists, then the market model does not contain arbitrages and the price of an option is determined by this measure via the conditional expectation. Furthermore, if the measure is also unique, then all European options (ones that can be exercised only at a predetermined time) are hedgeable in the model, that is, we can replicate the payoffs of those options with strategies constructed from other assets without adding or withdrawing capital after the initial investment. In this case the market model is called complete. We also study how hedging can be done in incomplete market models, particularly how to build risk-minimizing strategies. After that, we derive some useful tools for the problems of finding optimal exercise and hedging strategies for American options (ones that can be exercised at any moment before a fixed time) and introduce the Cox-Ross-Rubinstein binomial model to use as a testbed for the methods developed so far. In continuous time, we begin by constructing stochastic integrals with respect to Brownian motion, which is the stochastic component of our models. We then study important properties of stochastic integrals extensively. These help us comprehend the dynamics of asset prices and portfolio values. In the end, we apply the tools we have developed to the Black-Scholes model. In particular, we use Itô's lemma and Girsanov's theorem to derive the Black-Scholes partial differential equation, and we further exploit the Feynman-Kac formula to obtain the celebrated Black-Scholes formula.
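As a concrete illustration of the two settings described above, the sketch below prices a European call both by backward induction in a Cox-Ross-Rubinstein binomial tree and with the closed-form Black-Scholes formula; the numerical parameters are arbitrary illustrative values, not taken from the thesis.

# Illustrative sketch: CRR binomial pricing converging to the Black-Scholes price.
import numpy as np
from scipy.stats import norm

def crr_call(S0, K, r, sigma, T, n):
    # European call price by backward induction in an n-step CRR tree.
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))          # up factor
    d = 1.0 / u                              # down factor
    q = (np.exp(r * dt) - d) / (u - d)       # risk-neutral up probability
    prices = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
    values = np.maximum(prices - K, 0.0)     # payoffs at maturity
    for _ in range(n):
        values = np.exp(-r * dt) * (q * values[:-1] + (1 - q) * values[1:])
    return values[0]

def black_scholes_call(S0, K, r, sigma, T):
    # Closed-form Black-Scholes price of a European call.
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

print(crr_call(100, 105, 0.03, 0.2, 1.0, 500))       # binomial price converges to ...
print(black_scholes_call(100, 105, 0.03, 0.2, 1.0))  # ... the Black-Scholes value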
-
(2024) Machine learning operations (MLOps) is a paradigm at the intersection of machine learning (ML), software engineering, and data engineering. It focuses on development and operations practices from software engineering, providing principles, components, and workflows that form the MLOps operational support system (OSS) platform. The increasing use of ML, with growing data sizes and model complexity, has created a challenge: MLOps OSS platforms require cloud and high-performance computing (HPC) environments to achieve flexible and efficient scalability for different workflows. Unfortunately, there are few open-source solutions that are user-friendly or viable enough to be used by an MLOps OSS platform, which is why this thesis proposes a bridge solution, used by a pipeline, to address the problem. We used the Design Science Methodology to define the problem, set objectives, design the implementation, demonstrate the implementation, and evaluate the solution. The resulting solutions are an environment bridge called the HTC-HPC bridge and a pipeline, called the Cloud-HPC pipeline, that uses it. We defined a general model for Cloud-HPC MLOps pipelines and implemented its functions in a use-case-suitable infrastructure ecosystem and MLOps OSS platform using open-source, provided, and self-implemented software. The demonstration and evaluation showed that the HTC-HPC bridge and the Cloud-HPC pipeline provide easy-to-set-up, easy-to-use, customizable, and scalable workflow automation, which can be used for typical ML research workflows. However, they also showed that the bridge needs an improved multi-tenancy design and that the pipeline requires templates for a better user experience. These aspects, alongside testing the use case potential and finding real-world use cases, are left for future work.
-
(2013) The purpose of this study was to examine why mathematics is regarded as a boys' subject and to consider ways in which girls could be encouraged to trust their own abilities. In comprehensive school there is hardly any difference between girls' and boys' performance in mathematics, but boys' attitudes towards mathematics are more positive than girls'. Boys have more confidence in their own skills and are bolder in applying what they know. The study surveys ninth-graders' conceptions of studying mathematics and looks for differences between boys and girls. Thirty-two pupils from a lower secondary school in Hämeenlinna participated in the study, 19 of them boys and 13 girls. The study was carried out during a mathematics lesson in May 2013. The data were collected with a questionnaire consisting of open questions. The questions were formulated so that the answers would be unambiguous and easy to give in writing. The data were analysed with both quantitative and qualitative methods. Most pupils in the study estimated that girls and boys are equally good at mathematics. Boys reported liking mathematics considerably more often than girls, and girls' tendency to downplay their own mathematical ability and their weaker confidence in their own mathematical skills came up in several answers. Experiences of success are meaningful in studying mathematics; they can improve motivation and lead to better results. Success is experienced, for example, through a good test grade, when a pupil is able to help a friend solve a problem, or when a friend helps with a difficult problem.
-
(Helsingin yliopisto / Helsingfors universitet / University of Helsinki, 2005) The aim of this study is to find out how urban segregation is connected to the differentiation of educational outcomes in public schools. The connection between urban structure and educational outcomes is studied on both the primary and the secondary school level. The secondary purpose of this study is to find out whether the free school choice policy introduced in the mid-1990s has an effect on the educational outcomes of secondary schools or on the observed relationship between urban structure and educational outcomes. The study is quantitative in nature, and the most important method used is statistical regression analysis. The educational outcome data, covering the years 1999 to 2002, were provided by the Finnish National Board of Education, and the data containing variables describing the social and physical structure of Helsinki were provided by Statistics Finland and City of Helsinki Urban Facts. The central observation is that there is a clear connection between urban segregation and differences in educational outcomes in public schools. With variables describing urban structure, it is possible to statistically explain up to 70 % of the variation in educational outcomes in primary schools and 60 % of the variation in educational outcomes in secondary schools. The variables most significantly related to low educational outcomes in Helsinki are an abundance of public housing, a low educational status of the adult population and a high number of immigrants in the school's catchment area. The regression model has been constructed using these variables. The lower coefficient of determination for the educational outcomes of secondary schools is mostly due to the effects of secondary school choice. Studying the public school market revealed that students selecting a secondary school outside their local catchment area increase the variation of educational outcomes between secondary schools. When the number of students selecting a school outside their local catchment area is taken into account in the regression model, it is possible to explain up to 80 % of the variation in educational outcomes in the secondary schools of Helsinki.
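A hypothetical sketch, not part of the study, of how a regression of this kind could be set up in Python with statsmodels; the file name and column names are invented placeholders standing in for the catchment-area variables the abstract mentions, and no actual data or results are reproduced.

# Hypothetical OLS setup for school-level outcomes vs. catchment-area variables.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("school_catchment_data.csv")   # placeholder file name

model = smf.ols(
    "educational_outcome ~ public_housing_share + adult_low_education_share"
    " + immigrant_share",
    data=df,
).fit()

print(model.summary())   # coefficients and their significance
print(model.rsquared)    # share of variation explained (cf. the 60-70 % reported above)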
-
(2013) Partially ordered sets (posets) have various applications in computer science, ranging from database systems to distributed computing. Content-based routing in publish/subscribe systems is a major poset use case. Content-based routing requires efficient poset online algorithms, including efficient insertion and deletion algorithms. We study the query and total complexities of online operations on posets and poset-like data structures. The main data structures considered are the incidence matrix, Siena poset, ChainMerge, and poset-derived forest. The contributions of this thesis are twofold: First, we present an online adaptation of the ChainMerge data structure as well as several novel poset-derived forest variants. We study the effectiveness of a first-fit-equivalent ChainMerge online insertion algorithm and show that it performs close to optimal query-wise while requiring less CPU processing in a benchmark setting. Second, we present the results of an empirical performance evaluation. In the evaluation we compare the data structures in terms of query complexity and total complexity. The results indicate that ChainMerge is the best structure overall. The incidence matrix, although simple, excels in some benchmarks. Poset-derived forest is very fast overall if a 'true' poset data structure is not a requirement. Placing elements in smaller poset-derived forests and then merging them is an efficient way to construct poset-derived forests. Lazy evaluation for poset-derived forests shows some promise as well.
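A minimal sketch, under simplifying assumptions, of the first-fit idea behind chain-based online poset storage such as ChainMerge: a new element is appended to the first chain whose top element precedes it, and a new chain is started otherwise. This illustrates the general principle only; it is not the thesis's ChainMerge adaptation, and the function names are illustrative.

# First-fit insertion into a chain decomposition of a poset (illustrative sketch).
def first_fit_insert(chains, x, leq):
    # Append x to the first chain whose top element precedes x; otherwise start a
    # new chain. leq(a, b) is the partial-order query "a <= b"; queries are counted.
    queries = 0
    for chain in chains:
        queries += 1
        if leq(chain[-1], x):      # x extends this chain at the top
            chain.append(x)
            return queries
    chains.append([x])             # x does not extend any existing chain
    return queries

# Example: the divisibility order on integers.
if __name__ == "__main__":
    chains = []
    total = 0
    for n in [2, 4, 3, 8, 9, 6, 16]:
        total += first_fit_insert(chains, n, lambda a, b: b % a == 0)
    print(chains)                  # [[2, 4, 8, 16], [3, 9], [6]]
    print("order queries:", total)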
-
(2017) Online communality and mediated communication have become important tools for diasporas for negotiating a sense of belonging and identity, as well as a means of keeping in touch. Previous research has identified particular ties that diasporic populations have both to their original homeland and to their new country, and these form an essential part of the discussions in diasporas' online communities. The subject of this study is the Somali diaspora in Finland, whose online communities have been studied little. Theories of transnationalism and diaspora, as well as the networked nature of internet communities, display strong spatial elements that are essential to how they function. For this reason the study draws on spatial theory in developing its methodological approach. The aim is to examine how the Somali diaspora in Finland creates, organizes and maintains online spaces. The challenges typical of these online communities and the topics frequently discussed in them are also an important object of study. Two further frameworks, focusing on the theory of transnationalism and on the diasporic position, are used in the analysis. The study is based on semi-structured interviews conducted within the Somali diaspora in Finland. The interviews were collected using a so-called snowball method, and the research material consists of 16 interviews. The main finding of the study shows that members of the Somali diaspora in Finland maintain diverse engagements online and use social media in much the same way as other groups. Some diaspora-specific features can nevertheless be identified in the communities. The first of these is the use of language both as a means of control and as an enabler. Cultural reproduction and hybridization also play a significant role in the groups' discussions. Transnationalism appears in the groups through a variety of activities, which come across in the interviewees' descriptions of their social media use. The results of the study criticize the default approach to studying diasporic particularities and the reliance on case studies in studying the rapidly changing online culture of diasporas. Open methodologies, such as the spatial approach of this study, are presented as a better way to create a more representative picture of these diverse online cultures.
-
(2016) Capillary electrophoresis is a great option for analyzing metabolomics compounds, since the analytes are often charged. The technique is simple and cost-efficient, but it is not the most popular one because its concentration sensitivity is low. Therefore, on-line concentration techniques have been developed for capillary electrophoresis. The aim of this thesis is to give an introduction to the most common on-line concentration methods in capillary electrophoresis and to demonstrate a novel on-line concentration technique termed electroextraction. Until now, research on on-line concentration techniques in capillary electrophoresis has mainly focused on methods based on field amplification, transient isotachophoresis, titration-incorporated methods or sweeping, which are presented in the literature section. In two-phase electroextraction, the electrodes are placed in an aqueous acceptor phase and in an organic donor phase in which the analytes are dissolved. When the voltage is applied, the conductivity difference between the two phases causes a high local field strength in the organic phase, leading to fast migration of the cationic analytes towards the cathode. As soon as the analytes cross the solvent interface, their migration speed decreases and they are concentrated at the phase boundary. In these experiments, a normal capillary electrophoresis analyzer was used with an aqueous-phase droplet hanging at the tip of the capillary inlet. The experimental part was carried out at Leiden University, Division of Analytical BioSciences, in the Netherlands. An electroextraction-capillary electrophoresis system was built for the analysis of biological acylcarnitine compounds. After the method parameters had been assessed with ultraviolet detection, the method was coupled with mass spectrometric detection, and the selectivity and repeatability were briefly tested. Sensitivity was enhanced with the electroextraction procedure, but the extraction factors were not yet satisfactory. The selectivity of electroextraction became apparent when the extraction of acylcarnitines was performed using different solvents. Not all parameters affecting the electroextraction procedure were tested, and therefore the instability of the method was not completely understood. Thus, the method should be further investigated and optimized. In fact, all on-line concentration methods ought to be optimized for the target analytes in their existing matrix.
-
(2013) Rapid detection of bioactive compounds plays an important role in the phytochemical investigation of natural plant extracts. Hyphenated techniques that couple on-line chromatographic separation with biochemical detection are called high-resolution screening methods. In this system, high-performance liquid chromatography separates complex mixtures, and a post-column biochemical assay determines the activity of the individual compounds present in the mixtures. At the same time, parallel chemical detection techniques (e.g., diode-array detection, mass spectrometry, and nuclear magnetic resonance) identify and quantify the active compounds. In recent years, bioassays for radical scavenging (antioxidant) activity and immunoassays for antibodies in particular have been developed and applied; assays for enzymes and receptors remain limited. In the literature section of this thesis, the development of on-line post-column biochemical detection systems for screening bioactive compounds from complex mixtures is reviewed. The interaction of drugs with proteins has gained significant importance in various areas of analytical chemistry, and it can be expected that more drugs will be discovered as biotechnology develops. In the experimental section of this thesis, comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry was used to screen the chemical composition of birch bark (Betula pendula). The exploitation of the mass spectra and retention index information allowed the identification of more than 600 organic compounds. Altogether, 59 phenolic compounds were identified in the inner layer of birch bark. To the best of our knowledge some of these compounds (e.g., raspberry ketone and tyrosol) have not been reported as extractives of Betula species before. The results obtained by gas chromatography-mass spectrometry showed that several phenols with biological activity were present at relatively high concentrations in the sample. It was noticed that the content of the compounds depended on the solvents used, which differed in polarity and volatility. Phenols were extracted from birch bark using an environmentally friendly pressurized hot water extraction technique. It provided good extraction efficiencies for phenolic compounds compared to those achieved with Soxhlet extraction. With pressurized hot water extraction the amount of extractable phenolic compounds reached up to 23% of the dry weight, whereas the amount was 2-5% (w/w) with Soxhlet extraction. Typical extraction times varied from 20 to 40 minutes. Most of the phenolic compounds were extracted at 180 °C for 40 minutes. Increasing the extraction temperature from 150 to 180 °C increased the number of phenols extracted. However, elevated temperature can accelerate hydrolysis and oxidation, so that thermo-labile compounds can decompose and unwanted compounds can form. Pressurized hot water extraction, using water as the solvent, proved to be a very promising technique, and it has great potential for the extraction of phenols from birch bark in the future.
-
(2018) Monolithic architecture has been the standard way to architect applications for years. Monolithic applications use a single codebase, which keeps deployment and development simple without adding complexity as long as the application stays relatively small. As the codebase grows, the architecture may deteriorate, which slows down development and makes it harder to onboard new developers. Microservice architecture is a novel architecture style that tries to solve these issues in larger codebases. A microservice architecture consists of multiple small autonomous services that are deployed and developed separately. It enables more fine-grained scaling and makes faster development cycles possible by decreasing the amount of regression testing needed, because each of the services can be deployed and updated independently of the others. Microservice architecture also brings multiple new challenges that have to be solved in order to benefit from it, such as handling distributed transactions, communication between microservices, and separation of concerns within microservices. On top of the technical challenges there are also organizational and operational challenges; the operational challenges include monitoring, logging and automated deployment of microservices. This thesis studies the differences between monolithic and microservice architecture and pinpoints the main challenges in the transition from monolithic architecture to microservice architecture. A proof of concept on how to transform a single bounded context from the monolith to microservices is made to get a better understanding of the challenges. A plan for migrating tangled bounded contexts from the monolith to microservices is also made in order to fully support the transition process in the future. The results from the proof of concept and the plan show that cohesion and loose coupling are more likely to be preserved when a bounded context is transformed into a microservice.
-
(2020) Decision-making is an important part of all software development, and this is especially true in the context of software architecture. Software architecture can even be thought of as a set of architectural decisions, so decision-making plays a large part in shaping the architecture of a system. This thesis studies architecturally significant decision-making in the context of a software development project. It presents the results of a case study in which the primary source of data was interviews. The case is a single decision made in the middle of a subcontracted project, involving the development team and several stakeholders from the client, including architects. The decision was handled quickly by the development team when an acute need for a decision arose. The work relating to the decision-making was mostly done within the agile development process used by the development team; only the final approval from the client was done outside the development process. This final approval was given after the decision had already been made in practice and an implementation based on it had been built, which illustrates how difficult it is to incorporate outside decision-making into software development. The decision-making also involved a division of labour in which one person did the researching and preparation of the decision, and this constituted most of the work relating to the decision. This type of division of labour may perhaps generalize to other decision-making within software development.
-
(2018) Abelian categories provide an abstract generalization of the category of modules over a unitary ring. An embedding theorem by Mitchell shows that, whenever an abelian category is sufficiently small, one can find a unitary ring such that the given category may be embedded in the category of left modules over this ring. An interesting consequence of this theorem is that it can be used to generalize all diagrammatic lemmas (those whose conditions and claims can be formulated in terms of exactness and commutativity) that hold in all module categories to all abelian categories. The goal of this thesis is to prove the embedding theorem and then derive some of its corollaries. We start from the very basics by defining categories and their properties, and then begin constructing the theory of abelian categories. After that, we prove several results concerning functors, the "homomorphisms" of categories, such as the Yoneda lemma. Finally, we introduce the concept of a Grothendieck category, the properties of which will be used to prove the main theorem. The final chapter contains the tools for generalizing diagrammatic results, a weaker but more general version of the embedding theorem, and a way to assign topological spaces to abelian categories. The reader is assumed to know nothing more than what abelian groups and unitary rings are, except for the final theorem, in the proof of which basic homotopy theory is applied.
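For reference, the commonly cited form of the embedding theorem can be stated as follows in LaTeX; the exact wording in the thesis may differ.

% Commonly cited statement of the (Freyd-)Mitchell embedding theorem.
% Assumes an amsthm-style `theorem' environment.
\begin{theorem}[Mitchell embedding theorem]
  Let $\mathcal{A}$ be a small abelian category. Then there exist a unitary ring $R$
  and a full, faithful and exact functor
  $F \colon \mathcal{A} \to R\text{-}\mathrm{Mod}$
  into the category of left $R$-modules. In particular, exactness and commutativity
  of diagrams in $\mathcal{A}$ can be verified after passing to $R\text{-}\mathrm{Mod}$.
\end{theorem}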
-
(2016) The starting point of this Master's thesis is the 4V – Välitä, Vaikuta, Viihdy, Voi hyvin project of the cities of Helsinki, Espoo and Vantaa and of Pääkaupunkiseudun Kierrätyskeskus Oy (the Helsinki Metropolitan Area Reuse Centre), within which comic-strip workshops were organized for children and young people in 2008 and 2009. The research material consists of drawings produced in the comic workshops on the theme of a happy city. The aim of the study is to form an overall picture of children's happy city on the basis of the comic material and to reflect on the role of visual material in research. At the core of the study are the child, material produced by children and research concerning children, as well as factors related to observing visual material. The study consists of two broader parts: the representation of the happy city and the challenges of the research process. The representation of the happy city is approached through geographical dimensions and places, and the challenges of visual research are examined across the different parts of the research process. The research approach of the thesis is data-driven and method-oriented. The core of the study is formed by the large comic-strip material and the content analysis used to analyse it. The content analysis combines qualitative and quantitative methods: the observation and description of the elements appearing in the drawings with a matrix, and the calculation of the relative shares of the elements. The study highlights the children's role as producers of the material as well as the interpretative and methodological challenges that have affected the results. Based on the study, the instructions given to the children in the comic workshops influence the picture they draw of a happy city, producing a dualistic image of the city that emphasizes everyday places. The comic strip as a type of material, in turn, emphasizes action and characters rather than the built and natural environment. Taking these background factors into account, the children's happy city is a social city in which the role of the physical environment is to provide a functional setting for children's everyday activities. The physical city appears to the child as an opportunity for play and hobbies, but the structures and architecture of the built environment are not emphasized. A happy city is often achieved as the result of the joint efforts of its residents, coloured by themes of sustainable development. The children's happy city is a combination of features of environments drawn from both the real world and the imaginary world, as shown by the fairy-tale characters and places that recur in the drawings.
-
(Helsingin yliopisto / Helsingfors universitet / University of Helsinki, 2011) The aim of this master's thesis was to examine subjective wellbeing and personal happiness. The empirical study of happiness is part of broader wellbeing research and is based on the idea that the best experts on personal wellbeing are the individuals themselves. In addition to perceptions of personal happiness, the aim was also to acquire knowledge about personal values and the components personal happiness is based on. In this study, moving into a certain community and the characteristics of a neighbourhood contributing to happiness were defined to represent these values. The objective was, through a comparative case study, to obtain knowledge about the subjective wellbeing of individuals in two different residential areas inside the Helsinki metropolitan area. In a comparative case study the intention usually is that the examined units represent specific 'cases' of something broader, so that the results can to some extent be generalized. Consequently, the cases in this study were selected for their image as 'urban villages', and the juxtaposition was constructed between a secluded post-suburban village and a more heterogeneous urban village better attached to the existing urban structure. The research questions were formed as follows: Are there any differences between the areas regarding the components personal happiness is based on? Are there any differences between the areas regarding the level of residents' subjective wellbeing? Based on the residents' assessments, what are the most important characteristics of a neighbourhood contributing to personal happiness? The data used to answer these questions were obtained from an internet-based survey questionnaire. Based on the data, residents of the post-suburban village of Sundsberg seem to share a highly family-oriented set of values, and realizing these values is ensured by high income, wealth and a secure work situation. In Kumpula, by contrast, the components of happiness are placed more towards learning and personal development, interesting leisure and hobbies, and especially having an influence on communal decisions. There are also some differences in the residents' subjective wellbeing. Personal life is experienced as somewhat happier in Sundsberg than in Kumpula. People in Sundsberg are more satisfied with their personal health and their jobs, and feelings of loneliness, inadequacy and frustration are somewhat more common in Kumpula. Regarding the characteristics of a neighbourhood contributing to happiness, the data suggest that the key characteristics of an area are peacefulness and safety, good location and connections, and the proximity of parks and recreational areas. These characteristics were considered highly significant in both areas, but they were experienced as being realized better in Kumpula. In addition, the residents of Kumpula were overall more satisfied with the various characteristics contributing to happiness in their residential area, and they also emphasized some 'softer' elements connected to the social, functional and communal side of the area. From the Sundsberg point of view, the residential area best contributing to happiness is a child-friendly and safe community of like-minded people who share the same socio-economic situation. The results of this study can be linked back to the society and the metropolitan area from which the cases were chosen.
The results can thereby be seen as an example of the differentiation of the conditions of personal happiness between certain population segments. A spatial dimension to this process can also be detected, and the results suggest that regional segmentation also operates between high-ranking residential areas. The results of this research thereby contribute to the debate on an innovative, diverse and dynamic urban area, as well as on the cohesion of the metropolitan area and society as a whole.
-
(2016) In this master's thesis we explore the mathematical model of classical Lagrangian mechanics with constraints. The main focus is on the nonholonomic case, which is obtained by letting the constraint distribution be nonintegrable. Motivation for the study arises from various physical examples, such as a rolling rigid body or a snakeboard. In Chapter 2, we introduce the model and derive the associated equations of motion in several different forms, using the Lagrangian variational principle as a basis for the kinematics. We also show how the nonintegrability of the constraint distribution is linked to certain external forces via the Frobenius theorem. Symmetric mechanical systems are discussed in Chapter 3. We define the concept for a Lagrangian system with constraints and show how any free and proper Lie group action induces an intrinsic vertical structure on the tangent bundle of the configuration manifold. The associated bundle is used to define the nonholonomic momentum, which is a constrained version of the form that appears in the modern formulation of the classical Noether's theorem. One applies the classical Noether's theorem to a symmetric system with integrable constraints by restricting observation to an integral submanifold. This procedure, however, is not always possible. In nonholonomic mechanics, a Lie group symmetry implies only an additional equation of motion rather than an actual conservation law. In Chapter 4, we introduce a coordinate-free technique to split the Lagrangian variational principle into two equations, based on the Lie group invariance. The equations are intrinsic, that is to say, independent of the choice of connections, the related parallel transports and covariant differentiation. The vertical projection associated to the symmetry may be varied to alter the representation and shift the balance between the two equations. In Chapter 5, the results are applied to the rattleback, which is a Lagrangian model for a rigid, convex object that rolls without sliding on a plane. We calculate the nonholonomic momentum and state the equations of motion for a pair of simple connections. One of the equations is also solved with respect to a given solution of the other. The thesis is mainly based on the articles 'Nonholonomic Mechanical Systems with Symmetry' (A.M. Bloch, P.S. Krishnaprasad, J.E. Marsden, and R.M. Murray, 1996), 'Lagrangian reduction by stages' (H. Cendra, J.E. Marsden, and T.S. Ratiu, 2001), 'Geometric Mechanics, Lagrangian Reduction and Nonholonomic Systems' (H. Cendra, J.E. Marsden, and T.S. Ratiu, 2001) and the book 'Nonholonomic mechanics and control' (A.M. Bloch, 2003).
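For orientation, a standard local-coordinate form of the equations of motion for a constrained Lagrangian system, as given for instance in Bloch's 'Nonholonomic mechanics and control' (2003); the notation and the intrinsic formulation used in the thesis may differ.

% Lagrange-d'Alembert equations for a Lagrangian L(q, \dot q) subject to linear
% constraints \omega^a_i(q)\,\dot q^i = 0; the \lambda_a are Lagrange multipliers
% determined by the constraints.
\[
  \frac{\mathrm{d}}{\mathrm{d}t}\,\frac{\partial L}{\partial \dot q^{i}}
    - \frac{\partial L}{\partial q^{i}}
    = \lambda_{a}\,\omega^{a}_{i}(q),
  \qquad
  \omega^{a}_{i}(q)\,\dot q^{i} = 0 .
\]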
-
(2012) In this thesis I present the general theory of semigroups of linear operators. From the philosophical point of view, I begin by connecting deterministic evolution in time to dynamic laws that are stated in terms of a differential equation. This leads us to associate semigroups with models for autonomous deterministic motion. From the historical point of view, I reflect upon the history of the exponential function and its generalizations. I emphasize their role as solutions to certain linear differential equations that characterize both exponential functions and semigroups. This connection then invites us to consider semigroups as generalizations of the exponential function. I believe this angle of approach provides us with motivation as well as useful ideas. From the mathematical point of view, I construct the basic elements of the theory. First I briefly consider uniformly and strongly continuous semigroups. After that I move on to the more general σ(X, F)-continuous case. Here F is a so-called norming subspace of the dual X^*. I prove the existence of both the infinitesimal generator S of the semigroup and the resolvent (λ - S)^(-1), as well as some of their basic properties. Then I turn to the other direction and show how to create a semigroup starting from its generator. That is the content of the famous Hille-Yosida Theorem. From the practical point of view, I give some useful characterizations of the generator in terms of dissipativity and accretivity. These techniques also lead us to an effortless proof of Stone's Theorem on unitary groups. Finally, from an illustrative point of view, I give two applications. The first is about multiplicative semigroups on L^p spaces, where the setting is simple enough to allow intuition to accompany us. The second takes on the problem of generating a particular stochastic weak*-continuous semigroup. It serves to illustrate some of our results.
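For reference, the classical strongly continuous (contraction) case can be summarized as follows; the thesis works in the more general σ(X, F)-continuous setting, so its formulation differs in the details.

% Generator of a strongly continuous semigroup (T(t))_{t \ge 0} on a Banach space X:
\[
  S x = \lim_{t \to 0^{+}} \frac{T(t)x - x}{t}, \qquad x \in D(S).
\]
% Hille-Yosida theorem (contraction case): S generates a strongly continuous
% contraction semigroup on X if and only if S is closed and densely defined,
% (0, \infty) \subset \rho(S), and
\[
  \bigl\| \lambda (\lambda - S)^{-1} \bigr\| \le 1 \qquad \text{for every } \lambda > 0.
\]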
-
(2015) The purpose of this study is to develop a method for optimizing the data assimilation system of the HIROMB-BOOS model at the Finnish Meteorological Institute by finding an optimal time interval and an optimal grid for the data assimilation. This is needed to balance the extra time the data assimilation adds to the runtime of the model against the improved accuracy it provides. Data assimilation is the process of combining observations with a numerical model to improve the accuracy of the model. There are different ways of doing this, some of which are covered in this work. The HIROMB-BOOS circulation model is a 3D forecast model for the Baltic Sea. The forecast variables are temperature, salinity, sea surface height, currents, ice thickness and ice coverage. Some of the most important model equations are explained here. The HIROMB-BOOS model at the Finnish Meteorological Institute has a preoperational data assimilation system that is based on the optimal interpolation method. In this study the model was run for a two-month test period with different time intervals of data assimilation and different assimilation grids. The results were compared to data from five buoys in the Baltic Sea. The model gives more accurate results when the time interval of the data assimilation is small. The denser the data assimilation grid is, the better the results. An optimal time interval was determined taking into account the time the assimilation takes. An optimal grid was determined visually based on an optimal grid density, for which the added time had to be considered as well. The optimized data assimilation scheme was tested by performing a 12-month test run and comparing the results to buoy data. The optimized data assimilation has a positive effect on the model results.
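For reference, the generic analysis update used in optimal interpolation (the best-linear-unbiased-estimate form that underlies many assimilation schemes); the operational HIROMB-BOOS implementation may differ in detail.

% Optimal interpolation analysis update: x_b is the model background state, y the
% observation vector, H the observation operator, and B, R the background- and
% observation-error covariance matrices.
\[
  x_{a} = x_{b} + K\bigl(y - H x_{b}\bigr),
  \qquad
  K = B H^{\mathsf T}\bigl(H B H^{\mathsf T} + R\bigr)^{-1}.
\]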
-
(2016) A model in mathematical logic is called pseudo-finite if it satisfies only such sentences of first-order predicate logic as have a finite model. With its main part modelled on Jouko Väänänen's article 'Pseudo-finite model theory', this text studies classical model theory restricted to pseudo-finite models. We provide a range of classical results expressed in pseudo-finite terms, while showing that a set of other well-known theorems fail when restricted to the pseudo-finite unless substantially modified. The main finding remains that a major portion of the classical theory, including the Compactness Theorem, the Craig Interpolation Theorem and the Lindström Theorem, holds in an analogous form in the pseudo-finite theory. The thesis begins by introducing basic first-order model theory with the restriction to relational formulas. This purely technically motivated limitation does not exclude any substantial results or methods of the first-order theory, but it simplifies many of the proofs. With the introduction behind us, the text moves on to present all the classical results that will later be studied in pseudo-finite terms. To enable and ease this, we also provide some powerful tools, such as Ehrenfeucht-Fraïssé games. In the main part of the thesis we define pseudo-finiteness precisely and build a pseudo-finite model theory. We begin with easily adaptable results, such as the Compactness and Löwenheim-Skolem Theorems, and move on to trickier ones, exemplified by Craig Interpolation and Beth Definability. The section culminates in a Lindström Theorem, which is easy to formulate but hard to prove in pseudo-finite terms. The final chapter has two independent sections. The first one studies the requirements for a sentence to have a finite model, illustrates the construction of a finite model for a sentence that has one, and culminates in an exact finite-model existence theorem. In the second one we define a class of models with a certain island-like structure. We prove that the elements of this class are always pseudo-finite, and at the very end of the text we present a few examples of this class.
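In symbols, the definition used above can be written as follows; this is the standard formulation, and the thesis may phrase it slightly differently.

% A structure M is pseudo-finite if every first-order sentence it satisfies has a
% finite model; equivalently, M satisfies every sentence that holds in all finite
% structures.
\[
  \mathfrak{M} \text{ is pseudo-finite} \iff
  \bigl( \mathfrak{M} \models \varphi \implies \varphi \text{ has a finite model} \bigr)
  \text{ for every first-order sentence } \varphi .
\]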
-
(2014) In this thesis a random metric space, the Brownian map, which is almost surely homeomorphic to the two-dimensional Euclidean sphere, is constructed, and a possible discretization of the sphere using quadrangulations of the plane (planar maps with quadrilateral faces) is presented. First the Gromov-Hausdorff metric is constructed on the set of compact metric spaces. After this the Cori-Vauquelin-Schaeffer bijection is constructed, essentially between the set of plane trees whose vertices additionally carry an integer label and the set of quadrangulations, where the number of tree edges and the number of quadrangulation faces is the same fixed natural number. On the basis of this bijective correspondence, the number of quadrangulations with n faces is easy to compute for every natural number n. It is observed that a uniformly distributed quadrangulation with n faces is a random variable in the space of compact metric spaces, and it is noted that it is meaningful to study the convergence of these random variables in distribution. After the Brownian map has been constructed, a recent result proved by Jean-François Le Gall and Grégory Miermont is presented, according to which the Brownian map is the appropriate scaling limit of the discrete quadrangulations in the sense of convergence in distribution. At the end of the thesis it is briefly assessed how well the Brownian map describes a uniformly distributed random metric on the sphere, and open problems related to the topic are presented. The work is partly motivated by applications to quantum gravity theory.
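For reference, the Gromov-Hausdorff distance mentioned above is commonly defined as follows; the construction in the thesis may be phrased differently.

% Gromov-Hausdorff distance between compact metric spaces X and Y: the infimum is
% taken over all metric spaces Z and all isometric embeddings \varphi: X -> Z,
% \psi: Y -> Z, and d_H^Z denotes the Hausdorff distance in Z.
\[
  d_{\mathrm{GH}}(X, Y)
    = \inf_{Z,\,\varphi,\,\psi} d_{\mathrm H}^{Z}\bigl(\varphi(X), \psi(Y)\bigr).
\]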
-
(2023) After 2013, the environmental protection department in China has significantly reduced on-road emissions through the upgrading of emission standards, the improvement of fuel quality and economic tools. However, the specific effect of the control policies on emissions and air quality is still difficult to quantify, mainly because of the shortage of data on vehicle emission factors and vehicle activity. In this research, we developed a 2008-2018 on-road emission inventory based on the Emission Inventory Preparation Guide (GEI) and an existing vehicle activity database. Our estimates suggest that CO and PM2.5 showed a relatively significant decrease, by 66.2% and 58.8%, whereas the trends of NOx (5.8%) and NMVOC (-4.8%) were relatively stable. The Beijing-Tianjin-Hebei (BTH), Yangtze River Delta (YRD), Pearl River Delta (PRD) and Sichuan Basin (SCB) regions all showed a uniform trend, especially in NOx. For Beijing-Tianjin-Hebei, the significant decline in NOx might be caused by the earlier implementation of emission standards and fuel quality improvements. In addition, we designed additional evaporation emission scenarios to verify the applicability of GEI for quantifying emission impacts on secondary pollutants (PM2.5 and O3). The results indicate that evaporation emissions contributed about 3.5% to the maximum daily 8-hour average (MDA8) O3 concentration for Beijing, Shanghai and Nanjing; this value can reach up to 5.9%, 5.3% and 7.3% respectively, but the impact on PM2.5 is extremely limited. Our results indicate the feasibility of GEI for improving on-road emission inventories while lowering the technical barrier to establishing them, and for its further application in quantifying the contribution of on-road emissions to air quality. It also shows strong potential for the environmental assessment of on-road policies and for short-term air quality assessment.
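For reference, a generic bottom-up formulation of an on-road emission inventory of the kind the abstract describes; the exact GEI formulation and activity data structure may differ.

% Emission of pollutant p in region i: P is the vehicle population, VKT the annual
% kilometres travelled per vehicle, EF the emission factor, and j runs over vehicle
% categories (and, in practice, fuel types and emission standards).
\[
  E_{p,i} = \sum_{j} P_{i,j}\,\mathrm{VKT}_{i,j}\,\mathrm{EF}_{p,j}.
\]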