
Browsing by Title


  • Sarvola, Inka-Mari (2022)
    Reindeer (Rangifer tarandus L.) is an integral part of ecosystems across the northern boreal regions, and reindeer husbandry is an important socio-cultural factor, especially for indigenous peoples. Currently, reindeer husbandry in Fennoscandia is confronted with the deterioration of pasture areas, and reducing reindeer numbers has often been offered as a solution. However, in most reindeer herding districts, forestry has also strongly decreased the sustainable production capacity of winter pastures and has therefore played a significant role in pasture deterioration alongside high reindeer numbers. The interaction between forestry and reindeer husbandry has often been studied qualitatively, ecologically, or with simple bio-economic models from the perspective of forestry. In this thesis, I use a detailed interdisciplinary ecological-economic model to study how rotation forestry affects the economics of reindeer husbandry. The research questions are 1) how the length of the forest rotation period and 2) management practices such as soil scarification and the leaving of harvesting residues affect economically optimal reindeer husbandry. I extend a novel ecological-economic reindeer husbandry optimization model to include the effects of forestry on ground and arboreal lichen under the assumption of a normal forest structure. The effects of forestry on ground and arboreal lichen are based on previous literature. Modern dynamic optimization algorithms are used to solve the model for the optimal number of reindeer, annual net revenues, lichen biomass on pastures, and the level of supplementary feeding under different forest rotation lengths and management scenarios with zero and positive interest rates. The results show that the length of the forest rotation period affects the economically optimal solution.
When pasture rotation is used, shortening the forest rotation decreases the optimal number of reindeer, annual income, and the lichen biomass on pastures, but increases the amount of supplementary feed given. When pasture rotation is not used, shortening the forest rotation decreases the number of reindeer, annual net revenues, and supplementary feeding, but increases the lichen biomass. Soil scarification and harvesting residues lower the annual net revenues of reindeer husbandry by 1-15%, depending on the forest rotation length and pasture rotation. The longer the forest rotation, the less the annual net revenues are affected by these management practices. Higher interest rates lead to higher reindeer numbers and a higher level of supplementary feeding, but also to lower lichen biomass and lower annual net revenues from reindeer husbandry. The results of this thesis support earlier findings on the negative effects of rotation forestry and short rotation lengths on reindeer husbandry, and estimates that reindeer husbandry is more resilient when pasture rotation is used. As the economic viability of rotation forestry in Lapland has recently been questioned, even-aged forestry could offer a solution with the best management scenarios for both parties. The results of this thesis support an infinitely long forest rotation without soil scarification. This thesis also highlights the need for interdisciplinary co-design of ecological studies to ensure that they are suitable for building complex interdisciplinary optimization models.
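The bio-economic logic sketched in the abstract (lichen dynamics, herd size, supplementary feeding, net revenue) can be illustrated with a heavily simplified toy model. Every parameter, equation, and function name below is invented for illustration only; it is not the thesis's model, calibration, or optimization algorithm:

```python
import numpy as np

def optimal_constant_herd(growth=0.1, capacity=1000.0, intake=2.0,
                          price=1.0, feed_cost=0.6, years=50):
    """Toy bio-economic sketch (illustrative parameters only):
    lichen biomass follows logistic growth minus grazing, grazing
    shortfalls are covered by bought supplementary feed, and we
    brute-force the constant herd size maximizing cumulative net
    revenue. The thesis's model is far more detailed."""
    best_revenue, best_herd = -np.inf, 0.0
    for herd in np.linspace(0.0, 200.0, 201):
        lichen, total = capacity / 2.0, 0.0
        for _ in range(years):
            demand = intake * herd                 # lichen needed by herd
            grazed = min(demand, lichen)           # what pastures supply
            shortfall = demand - grazed            # covered by bought feed
            lichen = lichen + growth * lichen * (1.0 - lichen / capacity) - grazed
            total += price * herd - feed_cost * shortfall
        if total > best_revenue:
            best_revenue, best_herd = total, herd
    return best_herd
```

With feeding priced above the per-animal revenue, the optimum lands at a herd the pastures can roughly sustain, which is the qualitative trade-off the abstract describes.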
  • Volski, Anna (2017)
    The present thesis is a philological study that describes Russian-English and English-Russian lexicographical works of the period between the sixteenth and the nineteenth century and concentrates on the most noteworthy English-Russian dictionaries. The aim of the thesis is to pay the intellectual debt owed to the compilers of English-Russian dictionaries from the sixteenth to the end of the nineteenth century, whose works can be considered inalienable parts of the evolution of English-Russian lexicography. In total, eleven English-Russian dictionaries and one Russian-English dictionary are studied and described. The description of each dictionary includes information on the author(s), the place and time of publication, its macro- and microstructure, and its improvements and novelty compared with its predecessors. The description of these dictionaries, their structural analysis, and research based on related works on Russian bilingual lexicography make it possible to depict the gradual, four-centuries-long development of English-Russian lexicography. Throughout this evolution, it is possible to see how the organization and presentation of word lists became more uniform and the amount of information provided for the user grew. A separate chapter is devoted to A. Alexandrow’s Complete English-Russian Dictionary (1891), which has been studied relatively little. Apart from a depiction of the dictionary’s macro- and microstructure, its transliteration patterns are illustrated, and obsolete and archaic words, as well as lexemes that do not have a separate entry in the Oxford English Dictionary, are presented.
The presence of these words has been verified against the dictionaries mentioned in the preface of the Complete English-Russian Dictionary, namely Reiff’s New Parallel Dictionaries of the Russian, French, German and English Languages, Webster’s Complete Dictionary of the English Language, and Nuttall’s Standard Pronouncing Dictionary of the English Language. Additionally, Grammatin and Parenogo’s New Dictionary English and Russian and Banks’s Dictionary of the English and Russian Languages have been examined as possible sources of these lexemes. The comparative analysis shows that, despite their absence from the Oxford English Dictionary, almost all of these words can be found in one or several well-known nineteenth-century dictionaries. This thesis thus illustrates the gradual evolution of English-Russian lexicography.
  • Leino, Nea (2021)
    The aim of this research is to examine the impact of population aging on income inequality in Finland over the period from 1991 to 2016. The research question is relevant since population aging is a reality around the world, driven by declining birth rates and greater longevity. These vast demographic and socio-economic changes strain the well-being of nations. This study offers important insights into the discussion of income inequality in Finland, as no similar study has been conducted before. Understanding the link between aging and income inequality helps us see where policy attention might be needed if inequality grows adversely. The study is carried out using both a decomposition analysis and a shift-share analysis. These methods are commonly used for examining the contribution of particular characteristics to inequality, as they gauge the relative importance of different determinants in overall inequality. They are applied to traditional inequality measures belonging to the family of generalized entropy (GE) measures: the mean logarithmic deviation, Theil’s index, and the half-squared coefficient of variation. The use of multiple measures in inequality research is advisable, as they describe the distribution from different perspectives and clarify where in the distribution a change has taken place. To study the impact of population aging on income inequality, the population was partitioned into five age cohorts (0-39, 40-60, 61-65, 66-70, and 71+), and one- and two-person households were examined. Data for this study come from the Luxembourg Income Study Database (LIS). Income inequality was measured using disposable household income, equivalized by the square-root scale.
The decomposition analysis answers the question of how much of total inequality is attributable to variability within each subgroup and how much to differences between subgroups. To complement these results, the shift-share analysis lets us simulate a Finland that had not aged at all since 1991 or 2000, while other factors remain at their 2016 levels. The decomposition analysis leads to a clear conclusion: variation within groups contributes much more to total inequality than variation between groups. In the light of the shift-share analysis, interestingly, the aged Finland is less unequal than a Finland using the population shares of 1991 or 2000. Hence a study of aging that only examines changes in population shares, ceteris paribus, shows that aging has slowed down the rise of inequality in Finland. This is because the age structures of 1991 and 2000 put more weight on the most unequal and second most unequal age groups, relative to other age groups, than the population distribution of 2016 does.
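The within/between decomposition described above can be sketched for the mean logarithmic deviation, the GE measure that decomposes exactly into a population-share-weighted within-group term plus a between-group term; the function names are illustrative, not the study's code:

```python
import numpy as np

def mld(y):
    """Mean logarithmic deviation: mean of log(overall mean / income)."""
    y = np.asarray(y, dtype=float)
    return float(np.mean(np.log(y.mean() / y)))

def mld_decomposition(y, groups):
    """Split total MLD into within- and between-group components.

    Within  = population-share-weighted average of subgroup MLDs.
    Between = MLD of the distribution in which every person receives
              their subgroup's mean income.
    The two components sum exactly to the total."""
    y = np.asarray(y, dtype=float)
    groups = np.asarray(groups)
    within = 0.0
    between_incomes = np.empty_like(y)
    for g in np.unique(groups):
        mask = groups == g
        within += mask.mean() * mld(y[mask])
        between_incomes[mask] = y[mask].mean()
    return mld(y), within, mld(between_incomes)
```

The exact additivity (total = within + between) is what makes the GE family convenient for this kind of subgroup analysis; the Gini coefficient, by contrast, leaves a residual overlap term.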
  • Hitruhin, Lauri (2012)
    In this paper we define and study the Julia set and the Fatou set of an arbitrary polynomial f, defined on the closed complex plane and of degree at least two. We are especially interested in the structure of these sets and in approximating the size of the Julia set. First, we define the Julia and Fatou sets using the concepts of normal families and equicontinuity. We then prove many of the essential facts concerning these sets, laying the foundations for the main theorems of this paper, presented in the fifth chapter. By the end of this chapter we achieve a good understanding of the basic structure of the Julia set and the Fatou set of an arbitrary polynomial f. In the fourth chapter we introduce the Hausdorff measure and dimension, along with some theorems concerning them. In this chapter we also discuss fractals and self-similar sets, for example the Cantor set and the Koch curve. The main goal of the chapter is to prove a well-known result that allows one to easily determine the Hausdorff dimension of any self-similar set fulfilling certain conditions. We end the chapter by calculating the Hausdorff dimension of the one-third Cantor set and the Koch curve using this result, and observe that their Hausdorff dimensions are not integer-valued. In the fifth chapter we study the structure of the Julia set further, concentrating on its connectedness, and introduce the Mandelbrot set. In this chapter we also prove the three main theorems of this paper. First we give a sufficient condition for the Julia set of a polynomial to be totally disconnected. This result, together with some theorems proven in the third chapter, shows that in this case the Julia set is a Cantor-like set. The second result shows when the Julia set of a quadratic polynomial of the form f(z) = z^2 + c is a Jordan curve.
The third and final result shows that for an arbitrary polynomial f there exists a lower bound, depending on f, for the Hausdorff dimension of its Julia set. This is the most important result of this paper.
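The well-known result mentioned above gives, for a self-similar set satisfying the open set condition and built from N copies of itself each scaled by a factor r, the Hausdorff dimension d = log N / log(1/r). A minimal sketch of the two calculations from the fourth chapter:

```python
import math

def similarity_dimension(copies, scale_inverse):
    """Hausdorff dimension of a self-similar set satisfying the open
    set condition: the set is covered by `copies` scaled-down copies
    of itself, each scaled by 1/scale_inverse."""
    return math.log(copies) / math.log(scale_inverse)

# One-third Cantor set: 2 copies, each scaled by 1/3.
cantor_dim = similarity_dimension(2, 3)
# Koch curve: 4 copies, each scaled by 1/3.
koch_dim = similarity_dimension(4, 3)
```

Both values (log 2 / log 3 and log 4 / log 3) lie strictly between integers, which is the non-integer-dimension observation the abstract makes.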
  • Sinko, Jaakko (2020)
    The purpose of this thesis is to act as a guide to the 2017 article A study guide for the l^2 decoupling theorem by J. Bourgain and C. Demeter. Nevertheless, this thesis is self-contained. The aim has been to give a detailed presentation and to handle the weight exponent E especially carefully in the arguments. We begin by presenting the decoupling inequality of the l^2 decoupling theorem and the associated Fourier-transform-like operator. The theorem concerns finding a satisfactory upper bound for the decoupling constant related to the inequality. We also list some general results that a graduate student might not be very familiar with, among them a few consequences of Hölder's inequality. We then study the properties of the weight functions used in the L^p-norms in the decoupling. We present two operator lemmas to which many of our arguments can be reduced; the latter lemma allows us to use certain Schwartz functions in our proofs. Finally, we prove the l^2 decoupling theorem in the lower range 2 <= p <= (2n)/(n-1). This includes the definition of multilinear decoupling constants and an iterative process.
  • Tiainen, Niina (2017)
    This thesis belongs to the field of analytic ontology, a central subfield of philosophical metaphysics. It examines the ontology of states of affairs, which gives a systematic answer to the fundamental question of category theory: into which fundamental classes of entities is being divided? The starting point is a conceptual-realist position on the problem of universals, according to which properties are universal and repeatable, and exist in rebus, in individual objects. The thesis opens with a historical part, after which the topic is treated systematically. The central philosophers discussed are F. H. Bradley, Bertrand Russell, and Gustav Bergmann. I examine the ontology of states of affairs through three central questions. 1) From the standpoint of the ontology of states of affairs, the problem of the individuation of particulars is most naturally solved by assuming that the difference between individuals is ultimately a fundamental categorial difference (and thus not based on a difference in properties). Individuation is therefore grounded in ”bare particulars”. Bare particulars also play a second role in the ontology of states of affairs: they are the ultimate subjects of predication. The assumption of bare particulars has often been criticized on the grounds that the very concept is somehow contradictory. I argue that this view is unfounded. 2) By exemplification I mean the relation between particulars and their properties and relations. I study this question in a historical context, focusing on Bradley's views on relations and relatedness and on his famous ”regress argument” against relations. In this connection I also examine Bergmann's view of the nature of exemplification. 3) In the ontology of states of affairs, exemplification is closely tied to the question of the nature of states of affairs: states of affairs are unities (”unities”) formed by their constituents, yet they are not directly reducible to those constituents.
This ”problem of unity” has not always been sufficiently distinguished from the ”problem of constituents”, which concerns the categorial nature of the constituents. Examining the views of Bradley and Bergmann leads to a precise separation of these two problems; this is the most important ontological result of the thesis. I argue that the problem of unity can be answered if states of affairs are assumed to be metaphysically fundamental relative to their constituents, that is, if the constituents are ”abstractions” in the traditional philosophical sense of the word. The view requires detailed formulation, but with it one can answer certain criticisms intended as fundamental objections to the concept of a state of affairs. I further argue that Bergmann's concept of the exemplification relation clarifies the relation between whole and parts and can thereby answer the problem of constituents. Finally, I argue that if we assume the exemplification relation to be a primitive, asymmetric relation and one of the constituents of states of affairs, we obtain, within the ontology of states of affairs, a basis for the distinction between universals and particulars.
  • Lammassaari, Pasi (2013)
    Corporate credit risk in fixed income markets refers to the risk that a debt-issuing company will default before the maturity of the debt, or to a decrease in the market value of the debt due to decreasing credit quality. A number of quantitative credit risk models have been developed to measure the probability of default and/or the credit spreads of fixed income investments. These models can be roughly divided into two categories based on their approach to credit risk modelling: structural and reduced form models. Several commercial applications have been developed based on both model branches and are used in financial markets as tools for analyzing real-life investment decisions. The aim of this thesis is to introduce the theoretical framework behind structural and reduced form credit risk models and to present a comparative analysis of the strengths and weaknesses of the different models. A base case model for the structural approach (Merton 1974) and for the reduced form approach (Jarrow-Turnbull 1995) is presented in more detail, and the differences are discussed based on earlier academic research in the area. Even though the objectives of the models are similar, the two approaches are entirely different from a theoretical point of view. Structural models build on Merton's model and use the Black-Scholes option pricing framework as the foundation of credit risk analysis. Reduced form models, by contrast, can be considered a statistical approach to credit risk modelling, using market data on bond prices and credit spreads to measure the probability of default. The empirical part of the thesis reviews several empirical studies testing the soundness of different structural and reduced form models. The aim of this part is to find grounds for recommending either structural or reduced form models for investor use. The main findings suggest that when choosing a credit risk model, the purpose of use becomes the decisive factor.
It can be argued that structural models are in general more suitable for analyzing the credit risk of individual companies with company-specific needs, owing to their ability to offer economic causality. On the other hand, reduced form models can be recommended for trading and hedging purposes, for traders and credit managers working in liquid bond markets. Reduced form models are highly data-sensitive and need high-quality market data. Several studies suggest that structural models perform better with lower-rated bonds (below investment grade), whereas reduced form models are more suitable for higher-rated bonds (investment grade). Reduced form models can be seen as the more modern approach to credit risk modelling.
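The structural approach described above treats equity as a European call option on the firm's assets, with default occurring when asset value falls below the face value of debt at maturity. A minimal sketch of the textbook Merton (1974) quantities, with illustrative inputs (this is the standard formulation, not the thesis's implementation):

```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf  # standard normal CDF

def merton_default_probability(V, F, mu, sigma, T):
    """Probability that firm value V_T falls below debt face value F
    at maturity T, with V following geometric Brownian motion with
    drift mu and volatility sigma. Equals N(-d2) under the drift mu."""
    d2 = (log(V / F) + (mu - 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return N(-d2)

def merton_equity_value(V, F, r, sigma, T):
    """Equity as a Black-Scholes call on firm assets with strike F."""
    d1 = (log(V / F) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return V * N(d1) - F * exp(-r * T) * N(d2)
```

This is the "economic causality" the comparison refers to: default probability falls out of balance-sheet quantities (asset value, leverage, asset volatility) rather than being fitted statistically to market spreads.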
  • Pirttinen, Nea (2020)
    Crowdsourcing has been used in computer science education to alleviate teachers’ workload in creating course content, and as a learning and revision method for students through its use in educational systems. Tools that utilize crowdsourcing can be a valuable way for students to further familiarize themselves with course concepts while creating new content for their peers and for future course iterations. In this study, student-created programming assignments from the second week of an introductory Java programming course are examined alongside the peer reviews these assignments received. The quality of the assignments and of the peer reviews is inspected, for example, by comparing the peer reviews with expert reviews using inter-rater reliability. The purpose of this study is to inspect what kinds of programming assignments novice students create, and whether the same novice students can act as reliable reviewers. While it is not possible to draw definite conclusions from the results of this study, due to limitations concerning the usability of the tool, the results indicate that novice students are able to recognise differences in programming assignment quality, especially with sufficient guidance and well-thought-out instructions.
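The abstract does not say which inter-rater reliability coefficient was used to compare peer reviews with expert reviews; Cohen's kappa is one common choice for two raters with categorical ratings, sketched here purely as an assumption:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical labels:
    observed agreement corrected for the agreement expected by
    chance from each rater's marginal label frequencies."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    expected = sum(count_a[c] * count_b[c] for c in count_a) / n**2
    return (observed - expected) / (1 - expected)
```

Kappa is 1 for perfect agreement, 0 for chance-level agreement, and negative when raters agree less than chance would predict.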
  • Vataja, Maria (2016)
    South Africa is one of the largest recipients of asylum seekers in the world. Refugees in South Africa are often unable to enjoy their legal rights and freedoms because of widespread xenophobia and public institutions’ unawareness of refugees’ rights in the country. Forced migrants face multiple challenges in earning and maintaining a sustainable livelihood in South Africa. Self-reliance has become an important goal in humanitarian aid organizations’ attempts to enhance the economic and social empowerment of refugees. Various self-reliance programs aim to develop and strengthen refugees’ livelihoods and reduce their vulnerability and long-term reliance on humanitarian aid. The research focuses on the Cape Town Refugee Center’s Self-reliance Program, which aims to enhance the economic security of forced migrants in the region. The program gives small business grants to refugees and asylum seekers to set up businesses of their own. The research seeks to find out how the program works in practice and what effects it has had on the economic security of the beneficiaries. It looks into the biggest challenges forced migrants face in setting up a business of their own and reaching self-reliance in Cape Town, asks what factors lead to the success or failure of the forced migrants’ businesses, and examines how xenophobia affects their possibilities of reaching self-reliance. In addition, the research seeks to find out how the concept of self-reliance is understood by the forced migrants and by the self-reliance officer of the Refugee Center. The research material was collected during a six-week fieldwork period in Cape Town. The research was carried out using qualitative research methods: sixteen beneficiaries of the Self-reliance Program and the self-reliance officer of the Cape Town Refugee Center were interviewed in semi-structured interviews.
Content analysis was used to analyse the interviews. The beneficiaries’ experiences are compared with theoretical discussions in migration and livelihood research as well as with previous studies of forced migrants’ experiences in South Africa. In addition, the research looks into South Africa’s refugee situation and policy, and into the issue of xenophobia in the country. The research shows that besides material assistance, the Self-reliance Program offers empowering mental coaching for the beneficiaries. According to the research, social networks and the location of the business site are crucial for the success of a business, whereas lack of support and social networks, insufficient business knowledge, and having the business in an unsafe area are the main reasons for failure. The biggest obstacles for forced migrants to reach self-reliance in Cape Town are problems with documents, the xenophobic attitudes of people and public institutions, lack of support and safety, lack of information and educational opportunities, and the difficulty of finding employment in the formal sector. The main conclusion of the research is that self-reliance programs for forced migrants should highlight the importance of social networks and a safe location for the business site, and offer good business training for the beneficiaries. South Africa’s refugee policy should be transformed to make it easier for forced migrants to enjoy their legal rights and reach economic independence. Instead of depicting forced migrants as an economic threat to the country, the focus should be shifted to the value forced migrants can offer to South Africa with their skills and know-how.
  • Nikkilä, Miia (2013)
    This master’s thesis examines the new type of public management in Finland brought about by the international spread of the ideals of new public management. Specifically, the ideal of self-responsibility was examined in the Finnish context by focusing on the aftercare of short-term prisoners. The study focused on the question of how the prevailing public management in Finland affects the situation of released short-term prisoners and, specifically, their aftercare measures. To answer this, the study sought answers to three questions: 1) what problems are there in the aftercare of short-term prisoners in Finland, and what are the consequences of these problems; 2) how is the aftercare of short-term prisoners divided between the government, the municipality, and the third sector, and who is responsible for providing aftercare services; 3) does the public sector responsibilize released short-term prisoners and, if so, what kind of problems does this cause them in relation to their aftercare? The study was conducted using a qualitative multi-sited ethnographic approach drawing on different sets of data. First, governmental laws and policies regarding imprisonment and social welfare were used as secondary or background data. Second, ethnographic fieldwork was undertaken at a non-profit organisation, along with ethnographic interviews, to collect valuable insight into the topic and some first-hand experience. Third, thematic interviews were conducted with short-term prisoners, the staff of the non-profit organisation, and governmental social workers. Lastly, ethnographic observations were made during the thematic interviews. The findings of this study suggest that the influences of new public management and its ideal of responsibilization are visible in the aftercare of short-term prisoners.
There seems to be a move towards requiring short-term prisoners to take responsibility for their own matters already during their time in prison, but especially after their release. On top of this, these individuals are expected to actively demonstrate a motivation for change in order to be entitled to services, owing to the lack of resources and a move away from needs-based service provision. Problems with decentralised service provision, and the fact that short-term prisoners are not viewed as a group requiring specialised services, also contribute to this situation. Crucially, the responsibility of the government and the municipalities to provide services for this group is being shifted towards the third sector, to the point that the third sector has become the ‘problem solver’ of Finnish society. The lack of resources that is also prevalent within the third sector, however, means that individuals are expected to take on increasing amounts of responsibility for their own aftercare. The study concludes that further research is needed on the aftercare of short-term prisoners – and prisoners in general – to fully understand how new public management and its ideals affect this issue, and how the welfare reforms recently proposed by the government influence the situation of this marginal group of the population.
  • Sauli, Joose Mikko Juhani (2013)
    The study addresses an estimation problem faced by a large borrower, such as a government, related to an interest rate risk measure known as Cost-at-Risk, or CaR. The term denotes a threshold level of debt costs such that the actual debt costs incurred in a given time (say, one year) will be less than this threshold level with a given probability (say, 95 %). The main obstacle to determining CaR is that the probability distribution of future levels of interest rates is unknown. For this purpose, various models of the term structure of interest rates have been developed. This study takes one particular term structure model, the Longstaff–Schwartz model, under examination in order to determine its inherent suitability for the estimation of CaR. The model is an affine two-factor equilibrium model with analytic solutions for bond and option prices. The accuracy of the model is studied by simulating interest rate pseudo-data using a simulation program which corresponds exactly to the model’s assumptions, and then recalibrating the model to the pseudo-data. Given that the properties of the data-generating process (DGP) are known exactly, this approach allows us to compare the CaR estimates implied by the recalibrated model against the CaR implied by the actual properties of the DGP. Particular attention is paid to the methods by which the model is calibrated to data. In an effort to improve the accuracy of the Longstaff–Schwartz model, a new calibration method is developed. In order to appraise the accuracy of the Longstaff–Schwartz model, we compare its performance to that of a simpler benchmark model based on the Nelson–Siegel decomposition of the yield curve. The accuracy of the CaR estimates given by the two models is compared both in an environment where the DGP is of the Longstaff–Schwartz type, and in another environment where the DGP is of the Nelson–Siegel type. The results of the comparison can be summarized as follows. 
The new calibration method for the Longstaff–Schwartz model is highly accurate when the DGP is of the LS type, but is useless in the NS-type environment. When the LS model is calibrated using the technique proposed by Longstaff and Schwartz themselves, it turns out that the model gets no advantage from being correctly specified. When correctly specified, it fails to calibrate to data in about 25% of all cases, but when it is misspecified, the failure rate drops to 2.4%, and its accuracy improves and surpasses that of the NS model. These results are highly unexpected. It is possible that they are specific to the parameter values used in the simulations, but this issue is left for further research.
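The CaR definition above (a threshold that realized debt costs stay below with a given probability) can be read off a simulated cost distribution as a quantile. The sketch below uses a one-factor Vasicek short rate purely as a stand-in for the two-factor Longstaff-Schwartz dynamics, and all parameter values are illustrative:

```python
import numpy as np

def simulate_annual_costs(debt, r0, kappa, theta, sigma,
                          n_paths=10_000, steps=252, seed=0):
    """Simulate one year of floating-rate interest costs on `debt`
    under a Vasicek short rate dr = kappa*(theta - r)dt + sigma*dW.
    A simplified stand-in for the two-factor LS dynamics."""
    dt = 1.0 / steps
    rng = np.random.default_rng(seed)
    r = np.full(n_paths, r0)
    accrued = np.zeros(n_paths)
    for _ in range(steps):
        accrued += debt * r * dt   # interest accrues at current rate
        r += kappa * (theta - r) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return accrued

def cost_at_risk(costs, level=0.95):
    """CaR: the threshold realized costs stay below with probability `level`."""
    return float(np.quantile(costs, level))
```

Whatever term structure model generates the paths, the CaR estimate is just this upper quantile, which is why the accuracy of the rate model's calibration drives the accuracy of the CaR estimate.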
  • Myller, Mika (University of Helsinki, 2005)
  • Vartiainen, Pyörni (2024)
    Sums of log-normally distributed random variables arise in numerous settings in finance and insurance mathematics, typically when modelling the value of a portfolio of assets over time. In particular, the use of the log-normal distribution in the popular Black-Scholes model allows future asset prices to exhibit heavy tails whilst still possessing finite moments, making the log-normal distribution an attractive assumption. Despite this, the distribution function of a sum of log-normal random variables cannot be expressed analytically, and it has therefore been studied extensively through Monte Carlo methods and asymptotic techniques. The asymptotic behavior of log-normal sums is of special interest to risk managers who wish to assess how a particular asset or portfolio behaves under market stress. This motivates the study of the asymptotic behavior of the left tail of a log-normal sum, particularly when the components are dependent. In this thesis, we characterize the asymptotic behavior of the left and right tails of a sum of dependent log-normal random variables under the assumption of a Gaussian copula. In the left tail, we derive exact asymptotic expressions for both the distribution function and the density of a log-normal sum. The asymptotic behavior turns out to be closely related to Markowitz mean-variance portfolio theory, which is used to derive the subset of components that contribute to the tail asymptotics of the sum. The asymptotic formulas are then used to derive expressions for expectations conditioned on log-normal sums. These formulas have direct applications in insurance and finance, particularly for stress testing. However, we call into question the practical validity of the assumptions required for our asymptotic results, which limits their real-world applicability.
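Since the distribution function of a log-normal sum has no closed form, crude Monte Carlo is the natural baseline that asymptotic techniques improve on, and it becomes inefficient exactly in the deep left tail the thesis studies. A minimal sketch under a Gaussian copula (illustrative parameters only):

```python
import numpy as np

def lognormal_sum_left_tail(mu, cov, threshold, n=200_000, seed=1):
    """Crude Monte Carlo estimate of P(S <= threshold), where
    S = sum_i exp(X_i) and X ~ N(mu, cov): dependent log-normal
    components coupled through a Gaussian copula."""
    rng = np.random.default_rng(seed)
    x = rng.multivariate_normal(np.asarray(mu, float),
                                np.asarray(cov, float), size=n)
    s = np.exp(x).sum(axis=1)
    return float(np.mean(s <= threshold))
```

For thresholds far into the left tail the hit probability is tiny, so the relative error of this estimator blows up; exact asymptotic expressions for the distribution function sidestep that problem.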
  • Bergström, Malin (2017)
    The research in this thesis examines how history is processed and communicated in the comics of Will Eisner (1917–2005) and Charles M. Schulz (1922–2000). The thesis examines the uses of history in Eisner’s graphic novel To The Heart of the Storm (1991) and in Schulz’s cartoon strip Peanuts (1950–2000). Because of the extensive Peanuts material, I have chosen to solely focus on the strips that were published between the years 1950 and 1975. The analysis is primarily conducted within Peter Aronsson’s uses of history theoretical framework, but also includes other theoretical perspectives, such as Hayden Whites’s narratological approach to history, and Jan Assmann’s principles on cultural memory. As a methodological foundation for the study, I have used Michael F. Scholz’s model for history research, which considers the different aspects of comics, from creator and time of production to the contents of the narratives. Because Eisner and Schulz worked with two different formats of the same medium, the analysis has been conducted from a comparative, as well as a content-analytical approach. The analysis is divided into two case studies, where the first case study focuses on the autobiographical aspects of the work and its relationship to the cultural memory of the U.S. military draft, and the second case study approaches the different socio-political tensions that are raised in the comics, such as the notion of Othering, and the depicted political climates. As a versatile storytelling medium, comics can offer a variety of cross-discursive communicative solutions to engage the reader in their depicted stories. In terms of historical content, the medium’s visual and narratological solutions can reveal deeper historical contexts to the initial image shown in the panel. 
Eisner’s graphic novel depicts realistic imagery from the immigrant tenements of New York City in the 1930s, while Schulz’s strip characterises a mythical idyllic suburbia, where the historical elements are primarily portrayed through the metaphors and tropes that are embedded into the story arcs. The research shows that despite working within different genres and formats, the narratives characterise similar historical environments and social circumstances to their designated readerships. These characterisations can be tied to larger historical contexts, such as cultural memories, and can thereby contribute to a collective historical consciousness. Both comics narratives focus significantly on the collective memory of the draft, but equally raise issues such as religion and Communism. Milieus and atmospheres are important components in the overall depiction of history in the narratives, and many of the historical aspects are conveyed through a series of personal experiences, which the protagonists process in the story arcs. The personal impressions within the stories in turn strengthen the legitimacy of the narratives and contribute to the authenticity of the portrayed histories. Comics can be considered a form of cultural expression that conveys experience, social standards, and contemporary thought processes to a broad and varied readership. By demonstrating how the uses of history are employed in Eisner’s and Schulz’s work, the research also exemplifies how history can be presented and communicated within comics narratives. The sources show Eisner’s and Schulz’s perspectives on American wartime and cultural history, depicting societal structures and conventions, as well as political milieus. The uses of history in the narratives therefore contribute to new historical perspectives on the mid-twentieth century American home front.
  • Heikkilä, Siiri (2019)
    It is general industry practice to attach penalty and liquidated damages clauses to, for example, construction and supply contracts as well as non-compete clauses and confidentiality or non-disclosure agreements. The subject matter of this research project is the use and treatment of such penalty and liquidated damages clauses under Finnish and English laws. Contract terms constituting penalty and liquidated damages clauses are generally enforceable under Finnish law, while English law distinguishes between unenforceable penalty clauses and enforceable liquidated damages clauses. Therefore, the objective of this research project is to, through an examination and comparison of the subject matter, rethink penalty and liquidated damages clauses by looking past the enforceable/unenforceable divide, as it may not be as clear-cut as it seems. Three points are made regarding the acute practical relevance of the subject matter: pervasiveness; balancing of interests in contractual relationships; and, not least, legal certainty. This research project contains seven chapters. Each chapter builds upon the discussion in the preceding chapters, rendering the structure both logical and methodologically viable. Chapter 1 introduces the subject matter, objectives, and rationales for carrying out this research project. Chapter 2 describes, in brief, the methodological choices made over the course of this research project. The first substantive chapter, Chapter 3, presents the legal nature, functions and classification of penalty and liquidated damages clauses to facilitate their examination and comparison. Chapters 4 and 5 examine the use and treatment of penalty and liquidated damages clauses under Finnish and English laws respectively. Chapter 6 compares the use and treatment of such clauses under both approaches through an attempt to, if not answer, at least review each of the research questions set out in Chapter 1. Chapter 7 concludes. 
Functional comparative law methodology was chosen for the examination and comparison of the subject matter because the primary interest lies in the prevailing solutions to the balancing of interests in contractual relationships, an exercise that arises when judges engage in the interpretation of contract terms. Such an exercise entails, for example, the weighing of pacta sunt servanda and the principles of individual autonomy and freedom of contract against weaker party protection. On the one hand, the Finnish and English law approaches each recognize the intention of the parties to a contract as the starting point for the interpretation of contract terms. On the other hand, both approaches have in place a legal rule or practice that ensures weaker party protection. Under Finnish law, penalty and liquidated damages clauses are subject to review by judges under the adjustment mechanism set out in section 36 of the Finnish Contracts Act, while under English law, the same is possible under the penalty rule.
  • Arola, Aleksi (2021)
    Freshwater ecosystems are an important part of the carbon cycle. Boreal lakes are mostly supersaturated with CO2 and act as sources of atmospheric CO2. Dissolved CO2 exhibits considerable temporal variation in boreal lakes. Estimates of CO2 emissions from lakes are often based on surface water pCO2 and modelled gas transfer velocities (k). The aim of this study was to evaluate the use of a water column stratification parameter as a proxy for surface water pCO2 in Lake Kuivajärvi. The Brunt-Väisälä frequency (N) was chosen as the measure of water column stratification because of its simple calculation and encouraging earlier results. The relationship between N and pCO2 was evaluated during eight consecutive May–October periods between 2013 and 2020. The optimal depth interval for calculating N was obtained by analysing temperature data from 16 different measurement depths. The relationship between N and surface pCO2 was studied by regression analysis, and the effects of other environmental conditions were also considered. The best results for the full study period were obtained with a linear fit and an N calculation depth interval spanning from 0.5 m to 12 m. However, considering only the June–October periods resulted in improved correlation, with the relationship between the variables more closely resembling exponential decay. There was also strong inter-annual variation in the relationship. The proxy often underestimated pCO2 values during the spring peak, but provided better estimates in summer and autumn. The boundary layer method (BLM) was used with the proxy to estimate the CO2 flux, and the result was compared with fluxes from both the BLM with measured pCO2 and the eddy covariance (EC) technique. Both BLM fluxes compared poorly with the EC flux, which was attributed to the parametrisation of k.
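A bulk Brunt-Väisälä frequency over a fixed depth interval, as used in the study, can be sketched as follows. The density polynomial is a truncated standard freshwater equation of state, and the temperature values are illustrative, not measurements from Kuivajärvi:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m s^-2

def water_density(t):
    """Freshwater density (kg m^-3) from temperature (deg C), using the
    low-order terms of a standard polynomial equation of state."""
    return 999.842594 + 6.793952e-2*t - 9.095290e-3*t**2 + 1.001685e-4*t**3

def brunt_vaisala(t_upper, t_lower, z_upper, z_lower):
    """Bulk Brunt-Vaisala frequency N (s^-1) over a depth interval, from
    N^2 = (g / rho) * (d rho / d z), with depth z positive downward."""
    rho_u, rho_l = water_density(t_upper), water_density(t_lower)
    rho_mean = 0.5 * (rho_u + rho_l)
    n2 = (G / rho_mean) * (rho_l - rho_u) / (z_lower - z_upper)
    return float(np.sqrt(max(n2, 0.0)))  # unstable profiles clip to N = 0

# Summer-like stratification over the 0.5 m to 12 m interval from the study
n_summer = brunt_vaisala(t_upper=18.0, t_lower=6.0, z_upper=0.5, z_lower=12.0)
# Near-isothermal autumn column: much weaker stratification
n_autumn = brunt_vaisala(t_upper=7.0, t_lower=6.5, z_upper=0.5, z_lower=12.0)
```

A strongly stratified summer column yields N on the order of 10^-2 s^-1, while a near-isothermal column yields values an order of magnitude smaller, which is the contrast the proxy exploits.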
  • Flinck, Jens (2023)
    This thesis focuses on statistical topics that proved important during a research project involving quality control in chemical forensics. This includes general observations about the goals and challenges a statistician may face when working together with a researcher. The research project involved analyzing a dataset with high dimensionality relative to the sample size in order to determine whether parts of the dataset can be considered distinct from the rest. Principal component analysis and Hotelling's T^2 statistic were used to answer this research question. Because of this, the thesis introduces the ideas behind both procedures as well as the general idea behind multivariate analysis of variance. Principal component analysis is a procedure used to reduce the dimension of a sample. Hotelling's T^2 statistic, on the other hand, is a method for conducting multivariate hypothesis testing for a dataset consisting of one or two samples. One way of detecting outliers in a sample transformed with principal component analysis involves the use of Hotelling's T^2 statistic. However, using both procedures together violates the theoretical assumptions behind Hotelling's T^2 statistic. Due to this, the resulting information is considered more of a guideline than a hard rule for the purposes of outlier detection. To find out how the different attributes of the transformed sample influence the number of outliers detected according to Hotelling's T^2 statistic, the thesis includes a simulation experiment. The simulation experiment involves generating a large number of datasets. Each observation in a dataset contains the number of outliers according to Hotelling's T^2 statistic in a sample that is generated from a specific multivariate normal distribution and transformed with principal component analysis. 
The attributes that are used to create the transformed samples vary between the datasets, and in some datasets the samples are instead generated from two different multivariate normal distributions. The datasets are observed and compared against each other to find out how the specific attributes affect the frequencies of different numbers of outliers in a dataset, and to see how much the datasets differ when a part of the sample is generated from a different multivariate normal distribution. The results of the experiment indicate that the only attributes that directly influence the number of outliers are the sample size and the number of principal components used in the principal component analysis. The mean number of outliers divided by the sample size is smaller than the significance level used for the outlier detection and approaches the significance level when the sample size increases, implying that the procedure is consistent and conservative. In addition, when some part of the sample is generated from a different multivariate normal distribution than the rest, the frequency of outliers can potentially increase significantly. This indicates that the number of outliers according to Hotelling's T^2 statistic in a sample transformed with principal component analysis can potentially be used to confirm that some part of the sample is distinct from the rest.
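One round of the simulation experiment described above can be sketched as follows. The sample size, dimension, number of retained components, and shift magnitude are illustrative choices, not the thesis's settings, and the critical value uses the chi-squared approximation rather than the exact small-sample distribution:

```python
import numpy as np

def count_t2_outliers(x, n_components, t2_crit):
    """Count observations whose Hotelling's T^2, computed from the first
    n_components principal component scores, exceeds t2_crit."""
    xc = x - x.mean(axis=0)
    # PCA via singular value decomposition of the centred data
    _, s, vt = np.linalg.svd(xc, full_matrices=False)
    eigvals = s**2 / (x.shape[0] - 1)   # variances along the principal axes
    scores = xc @ vt.T                  # principal component scores
    t2 = np.sum(scores[:, :n_components]**2 / eigvals[:n_components], axis=1)
    return int(np.sum(t2 > t2_crit))

rng = np.random.default_rng(1)
n, d, k = 500, 10, 2
crit = 5.991  # chi-squared 95th percentile with k = 2 degrees of freedom

# Homogeneous sample: the flag rate should stay near or below alpha = 0.05
base = rng.multivariate_normal(np.zeros(d), np.eye(d), size=n)
flags_base = count_t2_outliers(base, k, crit)

# Shift part of the sample to a different mean: flags should increase
shifted = base.copy()
shifted[:50] += 4.0
flags_shifted = count_t2_outliers(shifted, k, crit)
```

Repeating this over many generated datasets, while varying the sample size and the number of retained components, gives the outlier-count frequencies that the thesis's experiment compares across scenarios.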
  • Snapir, Daniel (2022)
    The thesis examines two short stories by the French author Marcel Aymé, “Les Sabines” and “Les bottes de sept lieues”, with the aim of determining what values, beliefs, and attitudes the implied author of the stories expresses and how they manifest in the stories. In this context, values, beliefs, and attitudes refer to various conceptions of the human being, human life, and the world, as well as of right and wrong within them. The study thus focuses on the ethics of the stories. Its theoretical framework is the rhetorical approach to literature, a natural choice since James Phelan and Wayne C. Booth, at the very least, have in their major works treated ethical questions from a rhetorical perspective with considerable precision and depth. The thesis first examines “Les Sabines” and explores the manifold meanings expressed in the story through the supernatural ability of its protagonist, Sabine; these meanings include the ironic treatment of human fallibility and the presentation of a rational conception of reality, one that denies the magical, as insufficient. It then moves on to discuss how the story uses irony to criticize sexual morality, and how, beyond the ironic level, the implied author guides the reader's attitudes toward the story's characters, events, and values concerning sexuality. The second story, “Les bottes de sept lieues”, is first examined from the perspective of the relationships between its characters. What emerges is how the protagonist Germaine is presented as admirable despite her lowly social position and the other characters' attitudes toward her. The analysis then turns to the perspective of the story's child characters and to how the implied author expresses that he defends and respects the children's point of view. The analysis of this story is supported by a study of the intertextual relations the story enters into with fairy tales, especially Charles Perrault's tale “Le Petit Poucet”. 
“Les Sabines” and “Les bottes de sept lieues” represent different tendencies in Marcel Aymé's oeuvre, and their analyses complement each other. At the same time, the values and ideas identified in the stories are united by the way they stand in opposition to a common, bourgeois, and socially conditioned world of values and experience. The study takes its name from J. Robert Loy's claim that Aymé hints to his reader that there exists another, different reality. In light of the analysis of the two stories, the dominant values of that Ayméan reality are curiosity, innocence, imagination, tenderness, and humility.