
(2018) Abelian categories provide an abstract generalization of the category of modules over a unitary ring. An embedding theorem by Mitchell shows that whenever an abelian category is sufficiently small, one can find a unitary ring such that the given category embeds in the category of left modules over this ring. An interesting consequence of this theorem is that it generalizes all diagrammatic lemmas (those whose conditions and claims can be formulated in terms of exactness and commutativity) from module categories to all abelian categories. The goal of this paper is to prove the embedding theorem and then derive some of its corollaries. We start from the very basics by defining categories and their properties, and then construct the theory of abelian categories. After that, we prove several results concerning functors, the "homomorphisms" of categories, such as the Yoneda lemma. Finally, we introduce the concept of a Grothendieck category, whose properties are used to prove the main theorem. The final chapter contains the tools for generalizing diagrammatic results, a weaker but more general version of the embedding theorem, and a way to assign topological spaces to abelian categories. The reader is assumed to know nothing more than what abelian groups and unitary rings are, except for the final theorem, in whose proof basic homotopy theory is applied.
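The embedding theorem proved in the paper is, in a standard formulation (the precise smallness hypothesis is made exact in the text):

```latex
\begin{theorem}[Mitchell embedding theorem]
Let $\mathcal{A}$ be a small abelian category. Then there exist a unitary ring $R$
and an exact, fully faithful functor
\[
  F\colon \mathcal{A} \longrightarrow R\text{-}\mathrm{Mod}
\]
into the category of left $R$-modules.
\end{theorem}
```

Since $F$ is exact and fully faithful, exactness and commutativity of a diagram in $\mathcal{A}$ can be checked after transport to $R$-modules, which is what licenses the diagram-chasing generalizations described above.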

(2016) The starting point of this master's thesis is the 4V – Välitä, Vaikuta, Viihdy, Voi hyvin project of the cities of Helsinki, Espoo and Vantaa together with Pääkaupunkiseudun Kierrätyskeskus Oy (the Helsinki Metropolitan Area Reuse Centre), within which comic workshops for children and young people were organized in 2008 and 2009. The research material consists of drawings on the theme of a happy city produced in these comic workshops. The aim of the study is to form an overall picture of children's happy city on the basis of the comic material and to reflect on the role of visual material in research. At the centre of the study are the child, material produced by children and research concerning children, as well as the factors involved in observing visual material. The study consists of two larger parts: the representation of the happy city and the challenges of the research process. The representation of the happy city is approached through geographical dimensions and places. The challenges of visual research are analysed across the different stages of the research process. The research approach of the thesis is data-driven and method-oriented. The core of the study is formed by an extensive comic material and the content analysis used to analyse it. The content analysis combines qualitative and quantitative methods: observing and describing the elements appearing in the drawings with a matrix, and calculating the relative proportions of the elements. The study highlights the children's role as producers of the material, as well as the interpretive and methodological challenges that have influenced the results. Based on the study, the instructions given to the children in the comic workshops influence the picture they draw of a happy city, producing a dualistic image of the city that emphasizes everyday places. The comic as a data type, in turn, emphasizes action and characters over the built and natural environment.
Taking these background factors into account, the children's happy city is a social city in which the role of the physical environment is to provide a functional setting for children's everyday activities. The physical city appears to the child as an opportunity for play and hobbies, but the structures and architecture of the built environment are not emphasized. A happy city is often achieved as the result of the joint efforts of city residents, coloured by themes of sustainable development. The children's happy city is a combination of features of environments drawn from both the real and the imaginary world, as shown by the fairy-tale characters and places recurring in the drawings.

(Helsingin yliopisto / Helsingfors universitet / University of Helsinki, 2011) The aim of this master's thesis was to examine subjective well-being and personal happiness. Empirical study of happiness is part of broader well-being research and is based on the idea that the best experts on personal well-being are the individuals themselves. In addition to perceptions of personal happiness, the aim was also to acquire knowledge about personal values and the components personal happiness is based on. In this study, moving into a certain community and the characteristics of a neighbourhood contributing to happiness were defined to represent these values. The objective was, through a comparative case study, to obtain knowledge about the subjective well-being of individuals in two different residential areas inside the Helsinki metropolitan area. In a comparative case study the intention usually is that the examined units represent specific 'cases' of something broader, so that the results can to some degree be generalized. Consequently, the cases in this study were selected due to their image as 'urban villages', and the juxtaposition was constructed between a secluded post-suburban village and a more heterogeneous urban village better attached to the existing urban structure. The research questions were formed as follows: Are there any differences between the areas regarding the components personal happiness is based on? Are there any differences between the areas regarding the level of residents' subjective well-being? Based on the residents' assessments, what are the most important characteristics of a neighbourhood contributing to personal happiness? The data used to answer these questions was obtained from an internet-based survey questionnaire. Based on the data, residents of the post-suburban village of Sundsberg seem to share a highly family-oriented set of values, and actualizing these values is ensured by high income, wealth and a secure work situation.
In Kumpula, by contrast, the components of happiness seem to lean more towards learning and personal development, interesting leisure and hobbies, and especially having an influence on communal decisions. Concerning the subjective well-being of residents, some differences can be seen as well. Personal life is experienced as somewhat happier in Sundsberg than in Kumpula. People are more satisfied with their personal health and job satisfaction in Sundsberg, and feelings of loneliness, inadequacy and frustration are somewhat more common in Kumpula. Regarding the characteristics of a neighbourhood contributing to happiness, the data suggests that the key characteristics of an area are peacefulness and safety, good location and connections, and the proximity of parks and recreational areas. These characteristics were considered highly significant in both areas, but they were experienced to actualize better in Kumpula. In addition, the residents of Kumpula were overall more satisfied with the various characteristics contributing to happiness in their residential area. Besides the attributes mentioned above, residents in Kumpula also emphasize some 'softer' elements connected to the social, functional and communal side of the area. From the Sundsberg point of view, the residential area best contributing to happiness is a child-friendly and safe community based on like-minded people who share the same socio-economic situation. The results of this study can be linked back to the society and the metropolitan area from which the cases were chosen. The results can thereby be seen as an example of the differentiation of the conditions of personal happiness between certain population segments. A spatial dimension to this process can be detected as well, and the results thus suggest that regional segmentation also operates between high-ranking residential areas.
Thereby the results of this research contribute to the debate on an innovative, diverse and dynamic urban area, as well as on the cohesion of the metropolitan area and society as a whole.

(2016) In this master's thesis we explore the mathematical model of classical Lagrangian mechanics with constraints. The main focus is on the nonholonomic case, which is obtained by letting the constraint distribution be non-integrable. Motivation for the study arises from various physical examples, such as a rolling rigid body or a snakeboard. In Chapter 2, we introduce the model and derive the associated equations of motion in several different forms, using the Lagrangian variational principle as a basis for the kinematics. We also show how non-integrability of the constraint distribution is linked to some external forces via the Frobenius theorem. Symmetric mechanical systems are discussed in Chapter 3. We define the concept for a Lagrangian system with constraints and show how any free and proper Lie group action induces an intrinsic vertical structure on the tangent bundle of the configuration manifold. The associated bundle is used to define the nonholonomic momentum, which is a constrained version of the form that appears in the modern formulation of the classical Noether's theorem. One applies the classical Noether's theorem to a symmetric system with integrable constraints by restricting observation to an integral submanifold. This procedure, however, is not always possible. In nonholonomic mechanics, a Lie group symmetry implies only an additional equation of motion rather than an actual conservation law. In Chapter 4, we introduce a coordinate-free technique to split the Lagrangian variational principle into two equations, based on the Lie group invariance. The equations are intrinsic, that is to say, independent of the choice of connections, related parallel transports and covariant differentiation. The vertical projection associated to the symmetry may be varied to alter the representation and shift the balance between the two equations.
In Chapter 5, the results are applied to the rattleback, which is a Lagrangian model for a rigid, convex object that rolls without sliding on a plane. We calculate the nonholonomic momentum and state the equations of motion for a pair of simple connections. One of the equations is also solved with respect to a given solution of the other one. The thesis is mainly based on the articles 'Nonholonomic Mechanical Systems with Symmetry' (A.M. Bloch, P.S. Krishnaprasad, J.E. Marsden, and R.M. Murray, 1996), 'Lagrangian reduction by stages' (H. Cendra, J.E. Marsden, and T.S. Ratiu, 2001), 'Geometric Mechanics, Lagrangian Reduction and Nonholonomic Systems' (H. Cendra, J.E. Marsden, and T.S. Ratiu, 2001) and the book 'Nonholonomic mechanics and control' (A.M. Bloch, 2003).
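In local coordinates, the constrained equations of motion of the kind derived in Chapter 2 take the standard Lagrange-d'Alembert form (generic notation, not necessarily that of the thesis):

```latex
\frac{d}{dt}\,\frac{\partial L}{\partial \dot q^{i}}
  \;-\; \frac{\partial L}{\partial q^{i}}
  \;=\; \sum_{a=1}^{k} \lambda_{a}\,\omega^{a}_{i}(q),
\qquad
\omega^{a}_{i}(q)\,\dot q^{i} = 0 ,
```

where the one-forms $\omega^{a}$ span the annihilator of the constraint distribution and the $\lambda_{a}$ are Lagrange multipliers. When the distribution is non-integrable, the right-hand side cannot be absorbed by a change of coordinates, which is the nonholonomic case the thesis focuses on.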

(2012) In this thesis I present the general theory of semigroups of linear operators. From the philosophical point of view I begin by connecting deterministic evolution in time to dynamical laws that are stated in terms of a differential equation. This leads us to associate semigroups with models for autonomous deterministic motion. From the historical point of view I reflect upon the history of the exponential function and its generalizations. I emphasize their role as solutions to certain linear differential equations that characterize both exponential functions and semigroups. This connection then invites us to consider semigroups as generalizations of the exponential function. I believe this angle of approach provides us with motivation as well as useful ideas. From the mathematical point of view I construct the basic elements of the theory. First I briefly consider uniformly and strongly continuous semigroups. After that I move on to the more general σ(X, F)-continuous case. Here F is a so-called norming subspace of the dual X^*. I prove the existence of both the infinitesimal generator S of the semigroup and the resolvent (λ − S)^(−1), as well as some of their basic properties. Then I turn to the other direction and show how to create a semigroup starting from its generator. That is the content of the famous Hille-Yosida Theorem. From the practical point of view I give some useful characterizations of the generator in terms of dissipativity and accretivity. These techniques also lead us to an effortless proof of Stone's Theorem on unitary groups. Finally, from an illustrational point of view I give two applications. The first is about multiplicative semigroups on L^p spaces, where the setting is simple enough to allow intuition to accompany us. The second takes on the problem of generating a particular stochastic weak*-continuous semigroup. It serves to illustrate some of our results.
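For reference, the contraction case of the Hille-Yosida theorem mentioned above reads (a standard formulation): a densely defined operator $S$ on a Banach space $X$ generates a strongly continuous contraction semigroup if and only if

```latex
(0,\infty) \subset \rho(S)
\quad\text{and}\quad
\bigl\| (\lambda - S)^{-1} \bigr\| \;\le\; \frac{1}{\lambda}
\quad \text{for all } \lambda > 0 ,
```

where $\rho(S)$ denotes the resolvent set. The resolvent bound is exactly what makes the Yosida approximation of the would-be semigroup converge.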

(2015) The purpose of this study is to develop a method for optimizing the data assimilation system of the HIROMB-BOOS model at the Finnish Meteorological Institute by finding an optimal time interval and an optimal grid for the data assimilation. This is needed to balance the extra time the data assimilation adds to the runtime of the model against the improved accuracy it provides. Data assimilation is the process of combining observations with a numerical model to improve the accuracy of the model. There are different ways of doing this, some of which are covered in this work. The HIROMB-BOOS circulation model is a 3D forecast model for the Baltic Sea. The forecast variables are temperature, salinity, sea surface height, currents, ice thickness and ice coverage. Some of the most important model equations are explained here. The HIROMB-BOOS model at the Finnish Meteorological Institute has a pre-operational data assimilation system based on the optimal interpolation method. In this study the model was run for a two-month test period with different time intervals of data assimilation and different assimilation grids. The results were compared to data from five buoys in the Baltic Sea. The model gives more accurate results when the time interval of the data assimilation is small. The denser the data assimilation grid is, the better the results. An optimal time interval was determined taking into account the time the assimilation takes. An optimal grid was determined visually based on an optimal grid density, for which the added time had to be considered as well. The optimized data assimilation scheme was tested by performing a 12-month test run and comparing the results to buoy data. The optimized data assimilation has a positive effect on the model results.
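Optimal interpolation, the assimilation method named above, nudges the model state toward observations with a gain computed from the background and observation error covariances. A minimal one-dimensional sketch (synthetic numbers, not the HIROMB-BOOS configuration):

```python
import numpy as np

def optimal_interpolation(xb, y, H, B, R):
    """One optimal-interpolation update.

    xb: background (model) state, shape (n,)
    y:  observations, shape (m,)
    H:  observation operator, shape (m, n)
    B:  background error covariance, shape (n, n)
    R:  observation error covariance, shape (m, m)
    """
    # Gain K = B H^T (H B H^T + R)^-1
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    # Analysis = background + gain * innovation
    return xb + K @ (y - H @ xb)

# Three grid points, one observation at the middle point.
xb = np.array([10.0, 10.0, 10.0])   # background temperatures
y = np.array([12.0])                # observed temperature
H = np.array([[0.0, 1.0, 0.0]])     # observe grid point 1
# Correlated background errors: B_ij = exp(-|i - j|)
B = np.exp(-np.abs(np.subtract.outer(range(3), range(3))))
R = np.array([[0.25]])              # observation error variance
xa = optimal_interpolation(xb, y, H, B, R)
```

Because the background errors are spatially correlated, the single observation also pulls the neighbouring grid points toward it, which is why the density of the assimilation grid matters.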

(2016) A model in mathematical logic is called pseudo-finite if it satisfies only such sentences of first-order predicate logic as have a finite model. Modelled mainly on Jouko Väänänen's article 'Pseudo-finite model theory', this text studies classical model theory restricted to pseudo-finite models. We provide a range of classical results expressed in pseudo-finite terms, while showing that a number of other well-known theorems fail when restricted to the pseudo-finite, unless modified substantially. The main finding remains that a major portion of the classical theory, including the Compactness Theorem, the Craig Interpolation Theorem and the Lindström Theorem, holds in an analogous form in the pseudo-finite theory. The thesis begins by introducing basic first-order model theory with the restriction to relational formulas. This purely technically motivated limitation doesn't exclude any substantial results or methods of the first-order theory, but it simplifies many of the proofs. With the introduction behind us, the text moves on to present all the classical results that will later be studied in pseudo-finite terms. To enable and ease this, we also provide some powerful tools, such as Ehrenfeucht-Fraïssé games. In the main part of the thesis we define pseudo-finiteness precisely and build a pseudo-finite model theory. We begin with easily adaptable results such as the Compactness and Löwenheim-Skolem Theorems and move on to trickier ones, exemplified by Craig Interpolation and Beth Definability. The section culminates in a Lindström Theorem, which is easy to formulate but hard to prove in pseudo-finite terms. The final chapter has two independent sections. The first studies the requirements for a sentence to have a finite model, illustrates the construction of a finite model for a sentence that has one, and culminates in an exact finite-model existence theorem. In the second we define a class of models with a certain island-like structure.
We prove that the elements of this class are always pseudo-finite, and at the very end of the text we present a few examples of this class.
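The central definition can be stated as follows (a standard formulation):

```latex
\mathfrak{A} \text{ is pseudo-finite}
\;\iff\;
\text{for every first-order sentence } \varphi:\quad
\mathfrak{A} \models \varphi \implies \varphi \text{ has a finite model.}
```

Equivalently, $\mathfrak{A}$ is a model of the common first-order theory of all finite structures of its vocabulary: if some $\varphi$ true in all finite structures failed in $\mathfrak{A}$, then $\neg\varphi$ would be true in $\mathfrak{A}$ yet have a finite model, a contradiction.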

(2014) In this work we construct a random metric space, the Brownian map, which is almost surely homeomorphic to the two-dimensional Euclidean sphere, and offer a possible discretization of the sphere using planar quadrangulations. First, the Gromov-Hausdorff metric is constructed on the set of compact metric spaces. After this, the Cori-Vauquelin-Schaeffer bijection is constructed, essentially between the sets of plane trees and planar quadrangulations, where the number of tree edges and quadrangulation faces is the same fixed natural number and, in addition, an integer label is attached to each tree vertex. On the basis of this bijective correspondence, the number of quadrangulations with n faces is easy to compute for every natural number n. We note that a uniformly distributed quadrangulation with n faces is a random variable in the space of compact metric spaces, and that it is meaningful to study the convergence of these random variables in distribution. After the Brownian map has been constructed, a recent result proved by Jean-François Le Gall and Grégory Miermont is presented, according to which the Brownian map is the appropriate scaling limit of the discrete quadrangulations in the sense of convergence in distribution. Finally, the thesis briefly assesses how well the Brownian map describes a uniformly distributed random metric on the sphere and presents open problems related to the topic. Part of the motivation for the work comes from applications to quantum gravity theory.
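The Gromov-Hausdorff distance between compact metric spaces, the metric constructed first in the work, admits the standard formulation:

```latex
d_{GH}(X, Y) \;=\; \inf_{Z,\,\varphi,\,\psi} \, d_H^{Z}\bigl(\varphi(X),\, \psi(Y)\bigr),
```

where the infimum runs over all metric spaces $Z$ and isometric embeddings $\varphi\colon X \to Z$, $\psi\colon Y \to Z$, and $d_H^{Z}$ denotes the Hausdorff distance in $Z$. This is the metric in which the convergence in distribution of the rescaled random quadrangulations is formulated.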

(2023) Since 2013, the environmental protection authorities in China have significantly reduced on-road emissions through upgraded emission standards, improved fuel quality and economic tools. However, the specific effect of the control policies on emissions and air quality is still difficult to quantify, mainly due to the shortage of data on vehicle emission factors and vehicle activity. In this research, we developed a 2008-2018 on-road emission inventory based on the Emission Inventory Preparation Guide (GEI) and an existing vehicle activity database. Our estimates suggest that CO and PM2.5 showed a relatively significant decrease, by 66.2% and 58.8%, respectively, while the trends of NOx (5.8%) and NMVOC (4.8%) were relatively stable. The Beijing-Tianjin-Hebei (BTH), Yangtze River Delta (YRD), Pearl River Delta (PRD) and Sichuan Basin (SCB) regions all showed a uniform trend, especially in NOx. For Beijing-Tianjin-Hebei, the significant decline in NOx might be caused by earlier implementation of emission standards and fuel quality requirements. In addition, we designed additional evaporative emission scenarios to verify the applicability of GEI in quantifying the emission impact on secondary pollutants (PM2.5 and O3). The results indicate that evaporative emissions contributed about 3.5% to the maximum daily 8-hour average (MDA8) O3 concentration in Beijing, Shanghai and Nanjing, reaching up to 5.9%, 5.3% and 7.3%, respectively, while the impact on PM2.5 is extremely limited. Our results indicate the feasibility of GEI in improving, and at the same time lowering the technical barrier to, the establishment of on-road emission inventories, as well as its further application in quantifying the on-road emission contribution to air quality. Besides that, it shows strong potential for the environmental assessment of on-road policies and for short-term air quality assessment.

(2023) Self-Sovereign Identity is a new concept for managing digital identities in digital services. The purpose of Self-Sovereign Identity is to place the user in the center and move towards a decentralized model of identity management. Verifiable Credentials, Verifiable Presentations, Identity Wallets and Decentralized Identifiers are part of the Self-Sovereign Identity model. They have also recently been included in the OpenID Connect specifications, to be used with the widely used authentication layer built on OAuth 2.0. OpenID Connect authentication can now be leveraged with Decentralized Identifiers (DIDs) and the public keys contained in DID Documents. This work assessed the feasibility of integrating Verifiable Credentials, Verifiable Presentations and Decentralized Identifiers with OpenID Connect in the context of two use cases. The first use case is to integrate the Verifiable Credentials and Presentations into an OpenID Connect server and utilise Single Sign-On in a federated environment. The second use case is to bypass the OpenID Provider and enable the Relying Party to authenticate directly with the Identity Wallet. Custom software components, the Relying Party, the Identity Wallet and the Verifiable Credential Issuer, were built to support the assessments. Two new authorization flows were designed for the two use cases. The Federated Verifiable Presentation Flow describes the protocol of the Relying Party authenticating with the OpenID Provider, which receives the user information from the Wallet. The flow does not require any changes for any Relying Party using the same OpenID Provider to authenticate and utilise Single Sign-On. The Verifiable Presentation Flow enables the Relying Party to authenticate directly with the Wallet. However, this flow requires multiple changes to the Relying Party, and the benefits of a federated environment, e.g. Single Sign-On, are not available. Both of the flows are useful for their own specific use cases.
The new flows utilise the new building blocks of Self-Sovereign Identity and are promising steps towards self-sovereignty.
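A Verifiable Presentation passed from the Wallet to a verifier follows the W3C Verifiable Credentials data model; a minimal sketch of its shape (all identifiers below are illustrative placeholders, not real endpoints, and the proof objects are abbreviated):

```python
# Minimal shape of a W3C Verifiable Presentation wrapping one credential.
presentation = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiablePresentation"],
    "holder": "did:example:holder123",
    "verifiableCredential": [{
        "@context": ["https://www.w3.org/2018/credentials/v1"],
        "type": ["VerifiableCredential"],
        "issuer": "did:example:issuer456",
        "credentialSubject": {"id": "did:example:holder123", "name": "Alice"},
        # The issuer's signature over the credential (abbreviated).
        "proof": {"type": "Ed25519Signature2020"},
    }],
    # The holder's signature over the whole presentation, bound to a
    # verifier-supplied nonce to prevent replay (abbreviated).
    "proof": {"type": "Ed25519Signature2020", "challenge": "nonce-from-verifier"},
}

# Besides verifying both signatures against the DID Documents, a verifier
# checks that the credential subject matches the presentation holder:
subject = presentation["verifiableCredential"][0]["credentialSubject"]["id"]
holder_matches = subject == presentation["holder"]
```

In the Federated Verifiable Presentation Flow this structure is consumed by the OpenID Provider; in the direct Verifiable Presentation Flow the Relying Party performs these checks itself.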

(2018) At the end of the inflationary epoch, about 10^(−12) seconds after the Big Bang singularity, the universe was filled with plasma consisting of quarks and gluons. At some stage the cooling of the universe could have led to first-order cosmological phase transitions that proceed by the nucleation and expansion of bubbles all over the primordial plasma. Cosmological turbulence is generated as a consequence of bubble collisions and acts as a source of primordial gravitational waves. The purpose of this thesis is to provide an overview of cosmological turbulence as well as the corresponding gravitational wave production, and to compile some of the results obtained to date. We also touch on the onset of cosmological turbulence by analysing shock formation. In the one-dimensional case, considering only right-moving waves, the result is Burgers' equation. The development of a power spectrum with random initial conditions under Burgers' equation is calculated numerically using the Euler method with sufficiently small step sizes. Both in the viscid and inviscid cases, the result is the presence of a −8/3 power law in the inertial range at the time of shock formation.
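The shock-steepening mechanism behind Burgers' equation u_t + u u_x = 0 can be sketched with a forward Euler step in time and an upwind difference in space (toy grid sizes and a smooth initial condition, not the thesis's random spectra):

```python
import numpy as np

# Inviscid Burgers' equation u_t + u*u_x = 0 on a periodic grid,
# forward Euler in time with an upwind (backward) difference in space.
n, dt = 200, 0.001
dx = 1.0 / n
x = np.arange(n) * dx
u = 1.0 + 0.5 * np.sin(2 * np.pi * x)   # smooth initial condition, u > 0

initial_slope = np.max(np.abs(np.diff(u)) / dx)

for _ in range(300):                     # integrate to t = 0.3
    # u > 0 everywhere here, so the backward difference is the upwind one
    # and the scheme is stable (CFL = max(u)*dt/dx = 0.3 < 1).
    ux = (u - np.roll(u, 1)) / dx
    u = u - dt * u * ux

# Faster fluid overtakes slower fluid: the steepest gradient grows sharply
# as the shock-formation time (about t = 1/pi here) is approached.
max_slope = np.max(np.abs(np.diff(u)) / dx)
```

Shock formation is exactly the point where a single Euler-stepped realization develops the steep front whose statistics, over random initial conditions, produce the −8/3 inertial-range power law discussed above.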

(2022) Context: Factors that affect software team performance are a highly studied subject. One of the reasons for this is the subject's relevance to companies and software teams, since anyone interested in improving team performance wants to know which factors affect it positively. What motivated us to write this thesis on this subject was our interest in both software teams and the social sciences. Objective: The aim of this thesis was to better understand how the factors selected in our informal interviews affect software team performance and how large this effect is. The selected factors are psychological safety, the team leader's behaviour and the team's gender diversity. Method: We conducted a literature review with a keyword search. When we needed to narrow the search by a factor we used factor-related words and, if needed, limited the subject area to computer science. All in all, 23 reference papers were selected in the search. Results: Our analysis shows that all of our factors have a positive impact on team performance, though the size of this impact depends on the factor. Psychological safety seems to have the biggest impact, the behaviour of the team leader has a moderate impact, neither huge nor minuscule, and the gender diversity of the team has only a very small impact. Conclusions: Ultimately we conclude that all three chosen factors have a positive effect on software team performance, with psychological safety and the team leader's behaviour having the most significant impact. For software team leaders it is therefore important to pay attention to these two factors, especially since they are also linked to each other.

(2020) This thesis presents the concept of arbitrage and some applications of arbitrage pricing. An arbitrage opportunity means that there is a possibility to make money without any initial investment and without a risk of losing money. To start, some definitions are introduced from the fields of measure theory, probability theory and mathematical finance. Then the outlines of the market models considered throughout the thesis are defined. The mathematical definition of arbitrage and arbitrage pricing are introduced first in the simple setting of a one-period market model and then in a multi-period market model. As the main result of this thesis, the fundamental theorems of arbitrage pricing are introduced and proven. The first fundamental theorem of arbitrage pricing shows that a market is arbitrage-free if and only if there exists at least one risk-neutral probability measure, equivalent to the original probability measure, such that the discounted prices are martingales with respect to this risk-neutral measure. This is proven for the multi-period market model. The second fundamental theorem of arbitrage pricing shows that the completeness of a market model is equivalent to the existence of a unique risk-neutral probability measure. This is proven for the one-period market model. Finally, I look into some investing and hedging strategies: replicating payoffs and portfolio insurance. Some examples of commonly used options strategies are introduced, such as the butterfly spread and the iron condor.
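In the simplest one-period model, a binomial market, the risk-neutral measure of the first fundamental theorem has a closed form; a minimal sketch with illustrative numbers:

```python
# One-period binomial market: stock moves S0 -> u*S0 or d*S0, riskless rate r.
S0, u, d, r = 100.0, 1.2, 0.9, 0.05

# Risk-neutral probability: the unique q with E_q[S1] = (1 + r) * S0.
# It lies in (0, 1) exactly when d < 1 + r < u, i.e. the market is
# arbitrage-free, which is the first fundamental theorem in this setting.
q = (1 + r - d) / (u - d)

# The discounted stock price is then a martingale under q:
discounted_expectation = (q * u * S0 + (1 - q) * d * S0) / (1 + r)

# Arbitrage-free price of a call with strike K: discounted q-expectation.
K = 100.0
payoff_up, payoff_down = max(u * S0 - K, 0.0), max(d * S0 - K, 0.0)
call_price = (q * payoff_up + (1 - q) * payoff_down) / (1 + r)
```

Note that q depends only on u, d and r, not on the real-world up-probability: pricing happens entirely under the equivalent martingale measure.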

(2012) In this paper we define and study the Julia set and the Fatou set of an arbitrary polynomial f, which is defined on the closed complex plane and whose degree is at least two. We are especially interested in the structure of these sets and in approximating the size of the Julia set. First, we define the Julia and Fatou sets by using the concepts of normal families and equicontinuity. Then we move on to proving many of the essential facts concerning these sets, laying foundations for the main theorems of this paper presented in the fifth chapter. By the end of this chapter we achieve quite a good understanding of the basic structure of the Julia set and the Fatou set of an arbitrary polynomial f. In the fourth chapter we introduce the Hausdorff measure and dimension along with some theorems regarding them. In this chapter we also say more about fractals and self-similar sets, for example the Cantor set and the Koch curve. The main goal of this chapter is to prove a well-known result which allows one to easily determine the Hausdorff dimension of any self-similar set that fulfils certain conditions. We end this chapter by calculating the Hausdorff dimension of the one-third Cantor set and the Koch curve using the result described earlier, and notice that their Hausdorff dimension is not integer-valued. In the fifth chapter we study the structure of the Julia set further, concentrating on its connectedness, and introduce the Mandelbrot set. In this chapter we also prove the three main theorems of this paper. First we show a sufficient condition for the Julia set of a polynomial to be totally disconnected. This result, together with some theorems proven in the third chapter, shows that in this case the Julia set is a Cantor-like set. The second result shows when the Julia set of a quadratic polynomial of the form f(z) = z^2 + c is a Jordan curve.
The third and final result shows that for an arbitrary polynomial f there exists a lower bound for the Hausdorff dimension of its Julia set, depending on the polynomial f. This is the most important result of this paper.
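Membership in the filled Julia set of f(z) = z² + c is commonly probed with the escape-time criterion: once |z| exceeds max(|c|, 2), the orbit tends to infinity. A small sketch:

```python
def stays_bounded(z, c, max_iter=200, radius=2.0):
    """Escape-time test for f(z) = z**2 + c.

    If |z| ever exceeds max(|c|, 2), the orbit escapes to infinity,
    so the point lies outside the filled Julia set; otherwise the
    orbit is treated as bounded (up to max_iter iterations).
    """
    bound = max(abs(c), radius)
    for _ in range(max_iter):
        if abs(z) > bound:
            return False
        z = z * z + c
    return True

# c = 0: the Julia set is the unit circle, the filled Julia set the
# closed unit disc, so points split cleanly by modulus.
inside = stays_bounded(0.5 + 0.0j, 0.0j)    # |z| < 1: bounded orbit
outside = stays_bounded(1.5 + 0.0j, 0.0j)   # |z| > 1: orbit escapes
```

The Julia set itself is the boundary between the two behaviours; running the same test over c (with z = 0) rather than over z traces out the Mandelbrot set introduced in the fifth chapter.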

(2020) The purpose of this thesis is to act as a guide to the 2017 article 'A study guide for the l^2 decoupling theorem' by J. Bourgain and C. Demeter. However, this thesis is self-contained. The aim has been to give a detailed presentation and to handle the weight exponent E especially carefully in the arguments. We begin by presenting the decoupling inequality of the l^2 decoupling theorem and the associated Fourier-transform-like operator. The theorem concerns finding a satisfactory upper bound for the decoupling constant related to the inequality. We also list some general results that a graduate student might not be very familiar with; among them are a few consequences of Hölder's inequality. We move on to study the properties of the weight functions that we use in the L^p-norms in the decoupling. We present two operator lemmas to which we can reduce many of our arguments. The latter lemma gives us the opportunity to use certain Schwartz functions in our proofs. We then move on to prove the l^2 decoupling theorem in the lower range 2 ≤ p ≤ 2n/(n−1). This includes the definition of multilinear decoupling constants and an iterative process.
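Schematically, the l^2 decoupling inequality whose constant the theorem bounds has the shape (notation simplified from Bourgain-Demeter; the weight w and the family of caps are as set up in the thesis):

```latex
\Bigl\| \sum_{\theta} f_{\theta} \Bigr\|_{L^{p}(w)}
\;\le\;
D(\delta)\,\Bigl( \sum_{\theta} \bigl\| f_{\theta} \bigr\|_{L^{p}(w)}^{2} \Bigr)^{1/2},
```

where each $f_{\theta}$ has Fourier support in a $\delta$-cap $\theta$ on the paraboloid and $D(\delta)$ is the decoupling constant; the theorem asserts that $D(\delta)$ grows slower than any negative power of $\delta$ in the stated range of $p$.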

(2020) Crowdsourcing has been used in computer science education to alleviate teachers' workload in creating course content, and as a learning and revision method for students through its use in educational systems. Tools that utilize crowdsourcing can be a great way for students to further familiarize themselves with course concepts, all while creating new content for their peers and future course iterations. In this study, student-created programming assignments from the second week of an introductory Java programming course are examined alongside the peer reviews these assignments received. The quality of the assignments and the peer reviews is inspected, for example, by comparing the peer reviews with expert reviews using inter-rater reliability. The purpose of this study is to inspect what kinds of programming assignments novice students create, and whether the same novice students can act as reliable reviewers. While it is not possible to draw definite conclusions from the results of this study due to limitations concerning the usability of the tool, the results seem to indicate that novice students are able to recognise differences in programming assignment quality, especially with sufficient guidance and well-thought-out instructions.
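Inter-rater reliability between peer and expert reviews is commonly quantified with Cohen's kappa, which corrects raw agreement for agreement expected by chance; a minimal implementation (the ratings below are toy values, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters rating the same items."""
    n = len(rater_a)
    # Observed agreement: fraction of items with identical labels.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over labels of the product of marginal
    # label frequencies of the two raters.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Toy example: peer vs. expert quality ratings of eight assignments.
peer   = ["good", "good", "poor", "good", "poor", "good", "poor", "good"]
expert = ["good", "good", "poor", "poor", "poor", "good", "good", "good"]
kappa = cohens_kappa(peer, expert)
```

Kappa is 1 for perfect agreement and 0 for chance-level agreement, so even a high raw agreement rate can yield a modest kappa when one label dominates.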

(Helsingin yliopisto / University of Helsinki / Helsingfors universitet, 2005)

(2024) Sums of lognormally distributed random variables arise in numerous settings in finance and insurance mathematics, typically to model the value of a portfolio of assets over time. In particular, the use of the lognormal distribution in the popular Black-Scholes model allows future asset prices to exhibit heavy tails whilst still possessing finite moments, making the lognormal distribution an attractive assumption. Despite this, the distribution function of a sum of lognormal random variables cannot be expressed analytically, and it has therefore been studied extensively through Monte Carlo methods and asymptotic techniques. The asymptotic behavior of lognormal sums is of special interest to risk managers who wish to assess how a particular asset or portfolio behaves under market stress. This motivates the study of the asymptotic behavior of the left tail of a lognormal sum, particularly when the components are dependent. In this thesis, we characterize the asymptotic behavior of the left and right tails of a sum of dependent lognormal random variables under the assumption of a Gaussian copula. In the left tail, we derive exact asymptotic expressions for both the distribution function and the density of a lognormal sum. The asymptotic behavior turns out to be closely related to Markowitz mean-variance portfolio theory, which is used to derive the subset of components that contribute to the tail asymptotics of the sum. The asymptotic formulas are then used to derive expressions for expectations conditioned on lognormal sums. These formulas have direct applications in insurance and finance, particularly for the purposes of stress testing. However, we call into question the practical validity of the assumptions required for our asymptotic results, which limits their real-world applicability.
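The left-tail probabilities studied above can be estimated by plain Monte Carlo under a Gaussian copula, i.e., by exponentiating correlated normals; a small sketch (illustrative parameters, seeded for reproducibility):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sum of two dependent lognormals: S = exp(X1) + exp(X2), where
# (X1, X2) is jointly normal, so the components have a Gaussian copula.
mu = np.array([0.0, 0.0])
cov = np.array([[1.0, 0.5],
                [0.5, 1.0]])   # correlation 0.5

x = rng.multivariate_normal(mu, cov, size=200_000)
s = np.exp(x).sum(axis=1)

# Left-tail probability P(S <= 0.5): the sum is small only when every
# component is simultaneously small, which is the regime the thesis's
# asymptotic analysis characterizes.
p_left = np.mean(s <= 0.5)
```

Plain Monte Carlo becomes inefficient far in the tail (few samples land there), which is one reason the exact asymptotic expressions derived in the thesis are valuable.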

(2021)Freshwater ecosystems are an important part of the carbon cycle. Boreal lakes are mostly supersaturated with CO2 and act as sources of atmospheric CO2. Dissolved CO2 exhibits considerable temporal variation in boreal lakes. Estimates for CO2 emissions from lakes are often based on surface water pCO2 and modelled gas transfer velocities (k). The aim of this study was to evaluate the use of a water column stratification parameter as a proxy for surface water pCO2 in Lake Kuivajärvi. The Brunt-Väisälä frequency (N) was chosen as the measure of water column stratification due to its simple calculation process and encouraging earlier results. The relationship between N and pCO2 was evaluated during 8 consecutive May–October periods between 2013 and 2020. The optimal depth interval for the N calculation was obtained by analysing temperature data from 16 different measurement depths. The relationship between N and surface pCO2 was studied by regression analysis, and the effects of other environmental conditions were also considered. The best results for the full study period were obtained via a linear fit and an N calculation depth interval spanning from 0.5 m to 12 m. However, considering only June–October periods resulted in improved correlation, with the relationship between the variables more closely resembling exponential decay. There was also strong interannual variation in the relationship. The proxy often underestimated pCO2 values during the spring peak, but provided better estimates in summer and autumn. The boundary layer method (BLM) was used with the proxy to estimate CO2 flux, and the result was compared to fluxes from both BLM with measured pCO2 and the eddy covariance (EC) technique. Both BLM fluxes compared poorly with the EC flux, which was attributed to the parametrisation of k.
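The Brunt-Väisälä frequency over a depth interval can be computed from a temperature profile via water density, N^2 = (g / rho_mean) * d(rho)/dz with z positive downward. A minimal Python sketch; the truncated freshwater density polynomial and the summer temperature profile are assumptions for illustration, not the study's exact formulation (only the 0.5–12 m interval comes from the abstract):

```python
import math

G = 9.81  # gravitational acceleration, m s^-2

def water_density(t_c):
    """Approximate freshwater density (kg m^-3) for roughly 0-30 deg C.
    Truncated polynomial fit; an assumption, not the thesis' formula."""
    return (999.842594 + 6.793952e-2 * t_c
            - 9.095290e-3 * t_c**2 + 1.001685e-4 * t_c**3)

def brunt_vaisala(t_top, t_bot, z_top, z_bot):
    """N (s^-1) over the interval [z_top, z_bot], depth positive downward."""
    rho_top = water_density(t_top)
    rho_bot = water_density(t_bot)
    rho_mean = 0.5 * (rho_top + rho_bot)
    # Stable stratification: density increases with depth, so N^2 >= 0.
    n_sq = (G / rho_mean) * (rho_bot - rho_top) / (z_bot - z_top)
    return math.sqrt(max(n_sq, 0.0))

# Hypothetical summer profile: warm surface water at 0.5 m, cool water
# at 12 m (the depth interval found optimal in the study).
print(brunt_vaisala(t_top=18.0, t_bot=8.0, z_top=0.5, z_bot=12.0))
```

For a strongly stratified summer profile like this, N comes out on the order of 0.03 s^-1; an isothermal column gives N = 0.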

(2023)This thesis focuses on statistical topics that proved important during a research project involving quality control in chemical forensics. This includes general observations about the goals and challenges a statistician may face when working together with a researcher. The research project involved analyzing a dataset whose dimensionality was high compared to the sample size, in order to determine whether parts of the dataset can be considered distinct from the rest. Principal component analysis and Hotelling's T^2 statistic were used to answer this research question. Because of this, the thesis introduces the ideas behind both procedures, as well as the general idea behind multivariate analysis of variance. Principal component analysis is a procedure used to reduce the dimension of a sample. Hotelling's T^2 statistic, on the other hand, is a method for conducting multivariate hypothesis testing on a dataset consisting of one or two samples. One way of detecting outliers in a sample transformed with principal component analysis involves the use of Hotelling's T^2 statistic. However, using both procedures together violates the assumptions behind Hotelling's T^2 statistic. Due to this, the resulting information is treated more as a guideline than a hard rule for the purposes of outlier detection. To find out how the different attributes of the transformed sample influence the number of outliers detected according to Hotelling's T^2 statistic, the thesis includes a simulation experiment. The simulation experiment involves generating a large number of datasets. Each observation in a dataset contains the number of outliers, according to Hotelling's T^2 statistic, in a sample generated from a specific multivariate normal distribution and transformed with principal component analysis.
The attributes used to create the transformed samples vary between the datasets, and in some datasets the samples are instead generated from two different multivariate normal distributions. The datasets are observed and compared against each other to find out how the specific attributes affect the frequencies of different numbers of outliers in a dataset, and to see how much the datasets differ when part of the sample is generated from a different multivariate normal distribution. The results of the experiment indicate that the only attributes that directly influence the number of outliers are the sample size and the number of principal components used in the principal component analysis. The mean number of outliers divided by the sample size is smaller than the significance level used for the outlier detection and approaches the significance level as the sample size increases, implying that the procedure is consistent and conservative. In addition, when part of the sample is generated from a different multivariate normal distribution than the rest, the frequency of outliers can increase significantly. This indicates that the number of outliers according to Hotelling's T^2 statistic in a sample transformed with principal component analysis can potentially be used to confirm that part of the sample is distinct from the rest.
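The outlier-detection procedure described above can be sketched as: project the centred sample onto the first k principal components, form each observation's Hotelling's T^2 from its standardised scores, and flag values exceeding a quantile. A simplified Python sketch using a chi-square(0.95, df = 2) cutoff of 5.991 rather than an exact F-based limit; the data, dimensions, and cutoff are illustrative assumptions, not the thesis' simulation design:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 195 observations from one multivariate normal,
# plus 5 from a shifted distribution (the "distinct part").
x_main = rng.multivariate_normal(np.zeros(10), np.eye(10), size=195)
x_off = rng.multivariate_normal(np.full(10, 4.0), np.eye(10), size=5)
x = np.vstack([x_main, x_off])

# PCA via singular value decomposition of the centred data.
xc = x - x.mean(axis=0)
u, sing, vt = np.linalg.svd(xc, full_matrices=False)
k = 2                                  # retained principal components
scores = xc @ vt[:k].T                 # projections onto the first k PCs
var = sing[:k] ** 2 / (len(x) - 1)     # sample variance along each PC

# Hotelling's T^2 per observation in the reduced space; the PCs are
# uncorrelated, so T^2 is a sum of standardised squared scores.
t2 = (scores**2 / var).sum(axis=1)

# Flag observations above the chi-square(0.95, df=2) quantile, 5.991.
outliers = np.flatnonzero(t2 > 5.991)
print(len(outliers))
```

On data like this, the five shifted observations dominate the first principal component and are flagged, alongside a handful of false positives near the nominal 5% rate; as the abstract notes, the combined procedure is a guideline rather than an exact test.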