Browsing by Title


  • Davies, Caelum John (2020)
    Where is best? Much like the pay-for-access services, profiteering, and mystery that in part define the nation brand ranks forming the subject of this work: cross my palm with enough money and it might just be you when the results of this work’s index are revealed! Provocation aside, the concepts of nation branding and nation brands have quickly entered the spotlight of the world’s stage since Anholt first coined the term in 1996, and the field has rapidly become big business. From Cool Britannia to ESTonia, nations have been quick to ‘corporatise’ their image to gain attraction and favour around the world. This work is not interested in the brands created by countries per se; rather, it is interested in a country’s brand strength, that is, how effective countries are in achieving the goals they set out to accomplish through their branding efforts. This work is not the first to take such an interest: within a decade of coining the term, Anholt had developed a rank of his own to measure and compare the strength of nations’ brands. Jump forward to 2020 and multiple organisations, often consultancy firms, seek to do the same through ranks of their own. This work casts a critical eye over these ranks and develops an index of European country brand strength itself. Specifically, this work does three things. Firstly, it provides an understanding of ‘nation brand’ from a country-level perspective, generating its findings from the literature (and lack of literature) of thirty-five countries. Secondly, it critically assesses the successes and failures of nine prominent nation brand ranks, drawing on outside literature on university ranking and ranking in general. Thirdly, the crux of the work: based on the findings of the previous two aims, it develops an original index of country brand strength that is less analytically flawed than its comparators.
Through building its own index of country brand strength, the work develops a more holistic understanding of the challenges of indexing and ranking, whilst also evidencing that at least some of the shortcomings of its comparators can be overcome. The undertaking follows OECD guidance and is inspired by the 2010 work of Marc Fetscherin. To complement its aims, the work provides a detailed discussion of key interlinked and underlying concepts, including soft power, geoeconomics, and globalisation. The index is not without fault, failing one test of soundness, but it does yield that Denmark, the Netherlands, Austria, Sweden, Ireland and Estonia share the strongest country brands within the EU. The ranks it examines are not without fault either, the biggest problems revealed being black-boxing, subjectivity in surveying, and the enabling of misinterpretation by presenting only countries’ rank positions and not their index scores.
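The normalise-weight-aggregate pipeline that OECD guidance on composite indicators describes can be sketched as follows. The countries appear in the abstract, but the indicator names, figures and equal weights below are invented for illustration and are not the thesis’s actual inputs:

```python
# Minimal sketch of an OECD-style composite indicator: min-max
# normalisation of each raw indicator, then a weighted sum per country.
# All indicator data and weights here are hypothetical.

def min_max_normalise(values):
    """Rescale a raw indicator to [0, 1] across countries."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def composite_scores(countries, indicators, weights):
    """Weighted sum of normalised indicators for each country."""
    normalised = {name: min_max_normalise(vals) for name, vals in indicators.items()}
    return {
        country: sum(weights[name] * normalised[name][i] for name in indicators)
        for i, country in enumerate(countries)
    }

countries = ["Denmark", "Estonia", "Austria"]
indicators = {                        # hypothetical raw indicator data
    "tourism_arrivals_m": [28.0, 6.0, 31.0],
    "fdi_inflow_pct_gdp": [4.1, 2.9, 2.2],
}
weights = {"tourism_arrivals_m": 0.5, "fdi_inflow_pct_gdp": 0.5}

scores = composite_scores(countries, indicators, weights)
ranking = sorted(scores, key=scores.get, reverse=True)
```

Publishing the scores alongside the ranking, rather than rank positions alone, is one way of addressing the misinterpretation problem the work identifies in its comparators.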
  • Smirnov, Stefan (2015)
    This study examines the position of Scottish Gaelic on the Isles of Lewis and Harris in the Outer Hebrides of Scotland. The research questions are: 1) in which circumstances is Scottish Gaelic spoken; 2) what affects the current situation of Scottish Gaelic; and 3) what linguistic attitudes do people who live on Lewis and Harris hold towards Gaelic and English? The data were collected with a questionnaire; the survey gathered 56 respondents in total, of whom 31 were from the Isle of Lewis and 25 from the Isle of Harris.
  • Ruotimo, Jussi-Pekka (2019)
    This Master’s thesis examines the persuasive rhetoric of David Foster Wallace’s (1962–2008) short story collection Brief Interviews with Hideous Men (1999) as part of the exercise of power between the sexes. The material consists of fictional interviews, the shortest of which are under a page long and the longest several dozen pages. The interviews differ greatly from one another in nature and events: the material is markedly sporadic. I use the discourse analysis model devised by Arja Jokinen to analyse the rhetorical material. More generally, the rhetorical universe is illuminated through the views of Michel Foucault, Michael Billig, Chaïm Perelman and Wayne C. Booth. The topicality of the subject underlies my choice of it: with the so-called Me Too movement, sexual harassment and tensions between the sexes have become characteristic of our time. In the thesis I provide background on the emergence and diversity of this phenomenon, and offer examples from Wallace’s stories of everyday persuasive rhetoric that also manifests as an exercise of power. In part, the stories have also been read with an overly tendentious feminist agenda. My thesis shows how Wallace, by weaving a complex rhetorical web, succeeds in satirising the prevailing discourse. Research focused on Brief Interviews with Hideous Men has come to prominence only very recently, as evidenced by the emergence of the so-called Wallace Studies field. The essay collections A Companion to David Foster Wallace (2013) and The Cambridge Companion to David Foster Wallace (2018) have paved the way for my own research as well. I also frame my study with background on, and examination of, the short story form. I show that Wallace is aware of the traditions and history of the short story, as well as its more modern conventions. He is thus able to create a distinctive form, for which I propose the name interview-like monologue.
Characteristic of this form is the mixing of non-fiction-like material with fiction, and I argue that the form Wallace uses is what enables his thematic approach. My primary research question is what rhetorical devices Wallace uses in Brief Interviews with Hideous Men. All in all, the thesis shows that the range of Wallace’s rhetorical devices is exceptionally broad. In my reading, Brief Interviews with Hideous Men thus appears as a kind of rhetorical game throughout: it is itself a satire of the discourses of relations between the sexes.
  • Syrjänen, Emmi (University of Helsinki, 2014)
    Epilepsy is one of the most common seizure-causing chronic neurological diseases that veterinarians encounter in their work. The heritability of epilepsy has been studied in several dog breeds, and so far the causative epilepsy genes of ten breeds have been identified. No scientific publications exist on idiopathic epilepsy in Whippets, although individuals of the breed have been included in some epilepsy studies. The literature review covers canine epilepsy in general, the genetic background of epilepsy in dogs and humans, gene research on hereditary diseases, the Whippet as a breed and, briefly, the breed’s hereditary diseases. The experimental part consists of three components: a questionnaire survey, a pedigree assessment and a candidate gene study. The questionnaire was used to characterise the seizure profile of idiopathic epilepsy in 31 Whippets. 77% of the Whippets’ seizures were classified as generalised and 23% as focal epileptic seizures. Among Whippets with epilepsy, there were considerably more males (65%) than females (25%). The mean age of seizure onset was 3.2 years, and 32% of the Whippets were on long-term epilepsy medication. Based on the pedigree, X-chromosomal recessive inheritance was assessed as one possible mode of inheritance of idiopathic epilepsy in Whippets. The candidate gene study examined the association of SLC6A8, an X-chromosomal epilepsy gene known in humans, with epilepsy in Whippets. 24 Whippets with epilepsy and 24 healthy controls were selected for the study. The association was examined using single nucleotide polymorphisms (SNPs). No association between Whippet epilepsy and the examined SNPs was observed. The next possible step for examining the association of the SLC6A8 gene would be to sequence the entire gene. The inheritance of epilepsy in Whippets may also be multifactorial, as has been found in many other breeds.
Since the Whippets’ seizures fell clearly into two categories, it is also possible that two genetically entirely different diseases underlie the epilepsy. Further studies, for example a genome-wide association study, are required to find the gene or genes causing idiopathic epilepsy in Whippets.
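A case-control comparison of the kind described above can be sketched as a Pearson chi-square test on a 2×2 table of allele counts. The counts below are invented for illustration and are not the study’s data; note also that for an X-chromosomal SNP such as one in SLC6A8, male dogs contribute only one allele, which a real analysis would have to account for:

```python
# Hypothetical sketch of a case-control allelic association test:
# Pearson chi-square statistic on a 2x2 table of allele counts.

def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[cases_allele_A, cases_allele_B], [controls_allele_A, controls_allele_B]]."""
    n = sum(sum(row) for row in table)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            # expected count under independence of status (row) and allele (column)
            expected = sum(table[i]) * (table[0][j] + table[1][j]) / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# 24 affected and 24 control dogs -> up to 48 alleles per group at an
# autosomal marker; these counts are invented for illustration
cases    = [30, 18]   # allele A / allele B among affected dogs
controls = [27, 21]

stat = chi_square_2x2([cases, controls])
significant = stat > 3.841   # chi-square critical value, df = 1, alpha = 0.05
```

With counts this close, the statistic falls well below the critical value, matching the study’s finding of no association at the tested SNPs.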
  • Rautamies, Kerttu (2016)
    I study the advertising of spirits in American magazines. As source material I use the magazines Life and Ebony, the former representing mainstream white America and advertising aimed at that audience, the latter alcohol advertising targeted at the Black minority. The study focuses on print advertising of spirits: how advertising differs between the two magazines, and whether this changes over the period examined, 1960–1968. My material consists mainly of large, full-page advertisements. I use qualitative methods, content analysis and a historical qualitative approach, on the basis of which I interpret the visual advertising material. I compare the picture Life and Ebony give of what drinks were consumed during the period and what image advertisers wanted to convey of their products. I take into account the particular features of the historical period, focusing especially on the special position of the Black population and the progress of the civil rights struggle. The main emphasis of the study is nevertheless on the image of 1960s society conveyed by the advertisements. Spirits advertising differed between the two magazines: there were differences both in the types of drinks and in the content of the advertisements. The magazine aimed at white middle-class consumers showed hardly any members of minorities, including Black Americans. Often the same advertisement was remade for the minority magazine, using different models to create relatable figures for Black consumers. The civil rights struggle and the improvement of the political and social position of Black Americans were reflected in the advertising material only slowly. The few integrated advertisements, that is, advertisements featuring both Black and white people, were exceptions that appeared only towards the end of the period examined. These advertisements appeared in both Life and Ebony.
Advertisers changed their practices slowly, driven by both economic and social factors. Political advances in Black civil rights were reflected in the material I studied only to a small degree.
  • Kari, Niina Susanna (2021)
    This thesis evaluates the utility of Stephen Gill’s concept of new constitutionalism for understanding the functioning of the European Economic and Monetary Union (EMU) in 2015–2020. New constitutionalism is defined as legally locked-in, rules-based market discipline. The thesis has two research questions: 1) To what extent does the concept of new constitutionalism aptly describe the economic policy space of EMU member states? That is, to what extent does the EMU, with its associated rules and criteria, constrain the economic policies of its member states? 2) To what extent are new constitutionalist principles “locked in” in the EMU? That is, to what extent is new constitutionalism really constitutional in the EMU? Chapter 1 outlines Gill’s contributions on new constitutionalism in the EMU, defines the key terms and introduces the research problematic. Chapter 2 evaluates the effects of EMU membership on economic policy space, drawing extensively on data on fiscal deficits and levels of public debt, as well as on the fiscal stimulus packages implemented in the context of the coronavirus pandemic in EMU states and other advanced economies. It is shown that although average levels of debt have been lower in EMU states than in advanced economies overall, none of the major member states has remained within the required limits for public debt in this period. This renders the applicability of the concept of new constitutionalism dubious. Similarly, although EMU states stimulated their economies less than other advanced economies in response to the pandemic, this was not due to pressure from EU institutions, as Gill’s argument would suggest. Chapter 3 addresses the political contingency in the application of rules-based market discipline in the EMU. Most notably, beyond the extreme case of Greece in 2015, the European Commission has been reluctant to discipline member states for breaking the fiscal rules.
The last two sections of this chapter consider the extent to which the actions of the European Central Bank have undermined market discipline in the EMU, and the pertinent question of whether the coronavirus pandemic has ushered in a fundamental change in economic thinking in the EU. Drawing on this analysis, it is argued that the concept of new constitutionalism most aptly described the situation faced by Greece in 2015, but that its applicability has subsequently waned. Specifically, the concept fails to capture the political contingency and flexibility in the application of both fiscal rules and market discipline in the EMU. Chapter 4 is dedicated to the second research question, that is, the extent to which new constitutionalist principles are constitutionalised in the EMU. This chapter analyses the practical possibilities for EMU reform. Given the requirement of unanimity between member states for fundamental reform of the EMU, it is here that Gill’s argument about new constitutionalism being “locked in” is found to have the most applicability. The overall argument of this thesis is that the original Maastricht vision of the EMU has not come to fruition. In fact, the EMU experience illustrates that attempts to impose fixed visions of order onto social systems tend to produce disorder. While the eurozone crisis was generated by unbalancing tendencies inherent in the EMU, the EMU has subsequently survived only through circumvention of its own rules.
  • Saksala, Teemu Sakari (2013)
    The principal aim of this Master’s thesis is to derive the Whitney embedding theorem, originally proved by Hassler Whitney in 1936 and bearing his name, according to which every smooth manifold is diffeomorphic to a submanifold of some Euclidean space. The first chapter introduces three central results of analysis: the Inverse Function Theorem, the Implicit Function Theorem and the Rank Theorem. These theorems concern the existence of local inverses of smooth maps between domains and codomains of varying dimension, and they are used repeatedly throughout the following three chapters. The second chapter introduces the basics of the theory of smooth manifolds, defining among other things the following concepts: chart, smooth map, tangent bundle and tangent space, tangent maps, immersion and submanifold. The chapter concludes by examining the paracompactness of manifolds and developing tools for producing more useful charts from existing ones. The third chapter shows that every compact smooth manifold can be smoothly embedded in a Euclidean space. This motivates a closer examination of the properties of submanifolds of Euclidean spaces. Finally, an exact value is found for the dimension of the Euclidean space into which smooth compact manifolds can be smoothly embedded. The fourth chapter begins by continuing the study of paracompactness; the aim of its first section is to construct smooth partitions of unity. The chapter continues by defining sets of measure zero on manifolds. At the end of the chapter, the tools developed earlier are used to show that every smooth map into a Euclidean space of suitable dimension can be approximated by smooth injective immersions. With this result, the main theorem of the thesis can be proved fairly easily.
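For reference, the standard formulation of the compact case reads as follows (the abstract does not state which dimension bound the thesis proves, so this is the classical statement rather than a quotation of it):

```latex
% Classical statement of the Whitney embedding theorem, compact case:
\textbf{Theorem (Whitney, 1936).} \textit{Every smooth compact
$n$-dimensional manifold $M$ admits a smooth embedding}
\[
  f \colon M \hookrightarrow \mathbb{R}^{2n+1},
\]
\textit{so that $M$ is diffeomorphic to the submanifold
$f(M) \subset \mathbb{R}^{2n+1}$. Whitney's later (1944) ``strong''
embedding theorem sharpens the target dimension to $\mathbb{R}^{2n}$.}
```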
  • Majlander, Sini (2015)
    Liability of online operators for user-generated content is a topical issue in Europe. The case of Delfi AS v. Estonia, currently pending before the Grand Chamber of the European Court of Human Rights, is one example of a situation in which questions related to this issue are analysed. The First Section judgment finding the online news portal operator, Delfi, liable for defamatory user-generated comments was widely criticized. Another topical issue in Europe is the dissemination of cyber hate. This thesis combines these two elements and seeks to answer the following question: which online entities, if any, are liable for the dissemination of user-generated cyber hate? The analysis is limited to the approach of the Council of Europe, although the rules and principles adopted in the United States are referred to owing to their trendsetter status. Freedom of expression is protected by Article 10 of the European Convention on Human Rights. According to the rules and principles adopted in the framework of the Council, this Article does not protect ‘hate speech’ or its online version, ‘cyber hate’. However, there exists no clear definition of ‘hate speech’. In the strategies adopted by different Council bodies and in the case-law of the Court, several categories of speech have been considered ‘hate speech’, but this practice has been neither clear nor consistent. This is especially regrettable given that, according to the Court’s case-law, ‘hate speech’ can be categorically excluded from the protection of Article 10 by using the prohibition of abuse of rights clause provided for in Article 17 of the Convention. In the course of this research, I come to oppose the application of said Article owing to the unnecessary risks it poses to the enjoyment of freedom of expression. Moreover, I strongly endorse the adoption of a legally binding definition of the central notion.
Concerning liability issues, in the case-law of the Court the media have been afforded special protection under Article 10. On the other hand, this protection is coupled with responsibilities; professional journalists have therefore been held liable even for the dissemination of third-party content. The central elements analysed by the Court when imposing such liability in printed media cases have been the amount of editorial control and the intent of the journalist. Owing to their functions, some online operators have been assimilated to these traditional media actors: they are considered content providers. So-called Internet service providers (ISPs), by contrast, are a category of online operators regarded as intermediary or auxiliary entities, and they enjoy a limited liability regime. Again, the key to the distinction between content providers and ISPs is the amount of editorial control over the content the respective entity hosts, transmits or provides access to: extensive control over information is coupled with wider liability. In the case of Delfi, the First Section of the Court concluded that, owing to the amount of control Delfi exercised, it was to be regarded as a content provider. I agree with the main parts of the Court’s analysis. Furthermore, the liability-related principles adopted by the Court in this case can be applied to ‘cyber hate’ cases, although the criminal nature of these cases allows primary liability to be imposed also on the actual authors of the content. I consider that, for the Council’s fight against ‘hate speech’ to be effective, additional liability should be imposed on content providers and, in specific circumstances, even on ISPs. I endorse the mobilization of, and co-operation with, the relevant private-sector actors to form guidelines on self-regulatory measures they could apply in order to comply with their duties.
Accordingly, the suggested answer to the question posed at the beginning of this research is that all online operators can be liable for user-generated ‘cyber hate’ if they neglect their respective responsibilities. In the future, the aim of the Council should be to hinder any attempts by these entities to rely on so-called wilful blindness. However, any liability imposed must be assessed on a case-by-case basis, taking the circumstances of the specific case into account and respecting the inherent principles of Article 10 of the Convention.
  • Uimonen, Jenni (2020)
    This thesis studies Facebook founder Mark Zuckerberg’s discourse on connectivity in the context of Free Basics. As a specific focus, it looks at a Facebook connectivity initiative called Internet.org, launched in 2013 with the aim of connecting all of the world’s population to the internet. As a part of Internet.org, Facebook developed a smartphone application called Free Basics. As mobile data can be costly in many less developed countries, Free Basics provides free internet access to a limited number of websites, usually in categories such as Facebook, news sites, job listings, weather and health information. As of 2018, the application was active in over fifty countries across Asia, Africa and Latin America. The method used for analysing the data set is framing analysis. The data, consisting of 54 text documents published between 2013 and 2018, are collected from a single source, an American database called The Zuckerberg Files. This thesis finds that Zuckerberg frames connectivity and Free Basics in three different ways. The first frame, Free Basics as altruistic philanthropy, shows how Zuckerberg downplays any possible business benefits Facebook might gain from Free Basics. He stresses the charitable nature of the connectivity initiative and claims that Facebook simply acts out of deep belief in its mission: connecting everyone in the world. The only possible economic profit, according to Zuckerberg, could accrue to the partnering telecommunications companies. The second frame, Free Basics for universal benefits, displays Facebook’s global outlook on the connectivity issue. In this discourse, Zuckerberg imagines Free Basics as an all-encompassing solution for the five billion people who are currently unconnected.
He also argues for universal benefits from increased connectivity by referring to the “global knowledge economy”, in which even people who are already connected can gain from the new ideas that can now be shared through the internet. The third and last frame, Free Basics accelerating development, looks at Zuckerberg’s statements on how Free Basics can help people in developing countries improve their lives. In comparison to the second frame, here Zuckerberg uses individual people’s stories to give examples of the areas in which Free Basics can be helpful. These stories tie into themes of development, such as health and education, and Zuckerberg frames Free Basics and connectivity as simple, first-step fixes to a variety of issues. In conclusion, the results of this study appear to be in line with previous studies of Zuckerberg’s discourse. Many elements discussed in the literature also occurred in my data: Facebook’s desire to appear neutral, the debate on net neutrality, and the giant technology companies’ profound belief in technological determinism in development have all been widely discussed before. By critically studying Zuckerberg’s argumentation, we gain a better understanding of the company’s actions and motives. This research is valuable because it uses a unique data set to provide insight into the way Zuckerberg frames Free Basics, as well as connectivity in general.
  • Rydbeck, Axel (2018)
    One of the indisputable characteristics of any market is that it is, and always will be, to a certain extent speculative. When the securities markets are speculative, they include investors who are not intrinsically concerned with the forces of supply and demand that shape the fundamental value of a security. Instead, speculative investors engage in investing whose modus operandi relies on a change in the price of a security that is either insufficient or excessive relative to a change in its fundamental value. As with most phenomena on the securities markets, the net benefit or cost of speculative investing is the subject of extensive debate in financial research. Proponents highlight the increase in market liquidity often associated with speculative investing, as well as its potential to increase the informational efficiency of the markets. In contrast, opponents argue that speculators are the culprits behind inefficiencies and potentially devastating price bubbles. Owing to the ambiguous nature of speculative investing, little consensus exists as to when it should be considered either beneficial or harmful. The matter of speculative investing assumes a further theoretical dimension when we begin to consider regulating the securities market. More precisely, speculative investing raises interesting questions for the regulatory approaches towards both continuous issuer disclosure and insider trading. Obligating continuous issuer disclosure and prohibiting insider trading are by nature two very different regulatory tools with two very different purposes: the former seeks to extend the amount of relevant information on the market, whilst the latter ultimately limits investing on the basis of relevant information. When we add the rogue element of speculative investing, interesting questions arise as to when regulation will be hard-pressed to fulfil its respective purposes.
Challenges occur especially when regulation attempts to supply information regarding the fundamental value of a security to the market mechanism, but this may backfire when the market mechanism is gripped by potentially harmful elements of speculative investing. These theoretical conjectures become very tangible when we consider the content of the current EU approach to regulating disclosure and insider trading through the Market Abuse Regulation (MAR, EU 596/2014). In the MAR, the foundational approaches to both forms of regulation are manifested in the uniform definition of inside information. This means that any theoretical assessment of the appropriate dynamic between speculative investing and disclosure vis-à-vis insider trading regulation must necessarily fit into the framework offered by de lege lata, or risk resulting in unsatisfactory or legally untenable outcomes. This thesis seeks to investigate, expand on and highlight potential tensions between the two forms of regulation and the phenomenon of speculative investing. To do so, it adopts a law and economics method with a core constructed around the maxim of market efficiency. The ambition of the thesis is built on three principal purposes. Firstly, the thesis examines when speculative investing contains elements potentially harmful to market efficiency. Secondly, it establishes what consequences this has for obligations of continuous issuer disclosure and how this relates to prohibitions against insider trading. Thirdly, it evaluates the MAR approach to regulating issuer disclosure and insider trading in light of what has been established. In order to form a comprehensive understanding of the above, this thesis seeks to answer the following three research questions: 1. What elements of speculative investing can possibly harm the market mechanism? 2. When will issuer disclosure likely lead to speculative investing that harms the market mechanism? 3. How does the above relate to the purposes of insider trading regulation?
  • Sfakiotakis, Säde (2016)
    Immigration to Finland has risen significantly since the 1990s, and the adaptation of immigrants to society is a relevant and popular theme in public discussion. This study explores the opportunities of immigrants to adapt to and become members of Finnish society, as reported by Finnish respondents. Acculturation is an extensively studied field, but it has rarely been studied through the social markers of the receiving society, or through the opinions and attitudes of the native majority population. In this study, a new angle is introduced through the analysis of acculturation opportunities for immigrants from the perspective of native Finns. The study was carried out with quantitative methods, using SPSS for data handling. The data were gathered with a questionnaire sent to students at the University of Helsinki and at Aalto University (N=198). The outline of the questionnaire is borrowed from a larger research project in which Singapore, Canada and Japan are studied in addition to Finland. The central research questions are: first, who fits in; secondly, is it possible to fit in; and thirdly, what factors predict why some people are more reluctant than others to accept immigrants as part of the ingroup. The theoretical background of the study is based on John W. Berry’s acculturation studies. The theoretical models used are the Interactive Acculturation Model (IAM) and the Relative Acculturation Extended Model (RAEM), which other researchers have derived and elaborated from Berry’s work. To support the analysis and discussion of the results, a number of hypotheses have been used, such as G. W. Allport’s contact hypothesis, the similarity-attraction hypothesis and the culture distance theory. The results were analyzed through the creation of three dimensions of acculturation: sociocultural adaptation, socioeconomic adaptation, and social psychological adaptation.
The results indicate that Finns set the greatest expectations for acculturation in the social psychological dimension, followed by the sociocultural dimension and lastly the socioeconomic dimension. For the most part, Finns are confident that immigrants can meet these expectations with relative ease regardless of the dimension of acculturation. In addition, the study found that certain factors, such as greater perceived threat, explained greater expectations of acculturation.
  • Gyldén, Sara (2020)
    In a global competition for resources, differentiation and visibility are key to winning, and even countries are not exempt from the effort of creating a positive image for themselves. Favorable positioning in comparison to other countries is reached through planned branding efforts. This thesis focuses on the city brand of Seoul, the capital of South Korea. The aim is to discover whether the city brand of Seoul presented on YouTube by official place marketers, such as the Seoul Tourism Organization (STO) and the Seoul Metropolitan Government (SMG), differs from the city brand presented through user-generated content (UGC) created by the residents of the city. As a city brand consists of the city perceptions of several diverse stakeholder groups, the differences and similarities between the Seoul presented in the promotional materials and in the user-generated content have an impact on the city brand of Seoul. The research method used is qualitative video content analysis. The study includes a total of 59 videos, of which 28 are user-generated content on YouTube and the rest are official promotional videos of the STO and the SMG. The analysis of these videos is based on six primary categories and 24 subcategories, constructed from existing frameworks created by Beerli and Martin; Aaker; Anholt; and Margolis and Pauwels. As a result, four major differences in the projection of the Seoul city brand between the UGC and the promotional videos are found: the representation of different seasons, nature as a tool, the diversity of the city, and shopping and café culture as experiences. Additionally, five minor differences are found: family orientation; the emphasis on events; the focus on food and cuisine; public amenities, public transportation and getting to places; and prices.
Furthermore, six major similarities and two minor similarities are found: the connection of nature and urban life, social media-readiness, the coexistence of history and the modern day, the coexistence of people, editorial choices, the vitality of the city, overcrowding, and safety. The more commonalities there are between the place marketer videos and the videos created by the stakeholders, the more cohesive, interesting, unique, and accepted the resulting city brand can be. If the UGC and the promotional videos shared no similarities, the Seoul city brand would likely not be recognized or accepted by the city’s stakeholders, which could damage the already existing city brand. The similarities found indicate that the place marketers and internal stakeholders of Seoul share perceptions of the Seoul city identity to an extent where a strong city brand can be built. Additionally, the differences found indicate that the place promoters have made decisions on which stakeholder groups they wish to cater to more than others. This is advisable, since a lack of consistency and an effort to suit all target audiences simultaneously would dilute and weaken the brand.
  • Morikawa, Merit (2021)
    Aims. The psychological determinants of work have become increasingly relevant in recent years. According to self-determination theory, the fulfillment of the basic psychological needs, i.e., the experiences of autonomy, relatedness, and competence, is paramount for occupational well-being and vigor. Nevertheless, psychological need satisfaction has so far been studied only preliminarily from a person-centered perspective. This study aims to differentiate psychological need satisfaction subgroups within a working population, study potential predictors of subgroup membership, and examine the subgroups' associations with occupational well-being. Method. The participants of this study (n = 2 000) were drawn from a sample of Finnish workers, collected as part of a research project funded by the Finnish Work Environment Fund. Psychological need satisfaction scores were utilized in a bifactor form, formulated with confirmatory factor analysis. With them, psychological need satisfaction profiles were formed with latent profile analysis. Multinomial logistic regression analysis was utilized to study predictors of group membership, including job crafting, workload, and demographic factors. Finally, the subgroups were compared with analyses of variance on work engagement and burnout as indicators of occupational well-being. Results and conclusions. Five distinct psychological need satisfaction profiles could be differentiated from the sample. As in previous studies, the most prevalent profile group was the Globally satisfied group, which had superior well-being in terms of work engagement and burnout. The profile group characterized by the least well-being was the Globally dissatisfied yet competent group. Demographic factors, job crafting, and workload all predicted membership in the subgroups. The global level of need satisfaction was most clearly associated with the occupational well-being measures.
The results support the idea that practitioners should pay attention to the balanced satisfaction of the basic psychological needs in work organizations.
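The profile-extraction step described above (latent profile analysis with model selection) can be approximated with a Gaussian mixture model, as is common in applied work. The sketch below is purely illustrative: the data are synthetic, and the variable layout (three need-satisfaction scores per respondent) is an assumption mirroring the basic psychological needs, not the thesis's actual dataset or bifactor scores.

```python
# Illustrative latent-profile-style analysis using a Gaussian mixture.
# Synthetic data: two made-up subgroups ("globally satisfied" vs.
# "globally dissatisfied yet competent"); columns stand for autonomy,
# relatedness, and competence scores. None of this is thesis data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
satisfied = rng.normal(loc=[4.0, 4.2, 4.1], scale=0.4, size=(120, 3))
dissatisfied = rng.normal(loc=[2.0, 2.2, 3.8], scale=0.4, size=(80, 3))
X = np.vstack([satisfied, dissatisfied])

# Fit candidate models and pick the number of profiles by BIC,
# a typical model-selection criterion in latent profile analysis.
models = {k: GaussianMixture(n_components=k, random_state=0).fit(X)
          for k in range(1, 5)}
best_k = min(models, key=lambda k: models[k].bic(X))
labels = models[best_k].predict(X)
print("profiles:", best_k, "sizes:", np.bincount(labels))
```

In a real LPA workflow the chosen model's profile means would then be inspected and named, and profile membership used as the outcome in a multinomial logistic regression, as the abstract describes.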
  • Tikkanen, Aino (2020)
    This thesis sets out to investigate what frames are used in the U.S. media to discuss responsibility for climate change. In particular, the study seeks to identify what frames are used to discuss action for climate change mitigation. The normative framework for analyzing responsibility is provided by the social connection model of Iris Marion Young, which presents a forward-looking approach to addressing responsibility for issues of structural injustice. The theoretical framework of this thesis derives from existing literature on climate change, the media, and media framing. The study was conducted using the qualitative method of frame analysis. Data for the study were collected from the digital contents of three popular news media outlets in the United States: CNN, Fox News Channel, and The New York Times. The data consist of news articles that were published online in December 2019. The results of the study indicate that responsibility for climate change mitigation is rarely approached directly in the media. Rather, it is implied through discussions about what actions should be taken. The study identifies four main frames of responsibility. The first frame emphasizes the conflict between the younger and older generations and deems that collective efforts are required to address the situation. The second frame accentuates the political division over the issue of climate change by casting blame upon Asian nations while downplaying the respective responsibility of the United States. Similarly, the efforts of the Democratic Party are ridiculed. The third frame emphasizes consumer action through practical efforts but does not promote buying less as a possible solution. Lastly, the study identifies a frame in which corporate responsibility is approached in two ways: holding highly polluting industries accountable and promoting green business as a solution.
The study finds that the framing employed by Fox News Channel emphasizes the economic disadvantages of climate change mitigation and presents mitigation as a matter of causal responsibility for Asian nations. On the other hand, the findings suggest that media coverage of the youth protests against climate change often yields notions of collective responsibility and frames the issue of responsibility in a more contextualized setting. The findings support existing research on how the media frame the issue of climate change and how polarization affects the framing. Through the application of the social connection model, the findings of this study contribute to the literature on the news framing of climate change by demonstrating how the issue of responsibility is framed.
  • Hietanen, Heikki (2016)
    This thesis is a reading of the Book of Revelation in which the text’s relationship to both the Roman Empire and empires in general is evaluated. As it becomes clear that the author views the Roman Empire of his time in negative terms, two categories are used in evaluating the nature of his critique. When he opposes the Roman Empire with patterns and rhetoric that are similar to the pattern of empires, his views are classified as alter-empire. When empire is resisted with something profoundly different, the term anti-empire is applied. In order to make such a categorization possible, this thesis begins by establishing central terminology and, ultimately, a definition of empire as a concept. Here, the guidelines are provided by the central postcolonial theorists and by those biblical scholars who have applied postcolonial approaches in their work. Empire is defined not as a monolith, in terms of what it is, but rather in terms of what it does. This concept is then used in evaluating the Roman imperial discourse, the “official” way of understanding the world and human agency in it at the time when the Book of Revelation was written. The comparison reveals how the Roman imperial discourse fits the pattern of empire and provides context for the discourse presented in Revelation. This discourse emphasizes the binary opposition between adherence to God and accommodation to the Roman discourse. What is happening on earth is a mirror image of the celestial battle between God and his adversaries. Thus, all forms of compromise with the surrounding normalcy are branded as idolatrous and condemnable. The audience is encouraged to “patiently endure” and “not to be deceived” into participation in Rome’s discourse. The seemingly unlimited power of Rome will soon be revealed as pretension, when God decides to end the time he has “allowed” for Rome and his other enemies before everyone is judged and a new order established.
This judgment reveals the author’s disregard for titles, family connections and earthly might. All human beings are called to personal adherence to God, and this witness is the only condition on which an individual’s fate is decided. John is also adamant in denying violence as an acceptable agency for human beings, even if it has a major role as God’s tool in the establishment of his kingdom. These are the major anti-empire aspects of the Book of Revelation. For the most part, the work aligns itself more with the pattern of alter-empire. Victory over enemies establishes God’s hegemony. God’s superior might and violence grant him the right to rule. The presently marginalized “saints” will share this rule, and their opponents will be destroyed. This seemingly clear-cut binarism is ultimately undermined by ambivalence, as even the final chapters seem to contain hints of blurred boundaries. Such a failure of dualistic discourses is also a typical feature of empire.
  • Wang, Hao (2008)
    Anabaena is a common member of the phytoplankton in lakes, reservoirs and ponds throughout the world. It is a filamentous, nitrogen-fixing cyanobacterial genus that is frequently present in the lakes of Finland. Anabaena sp. strain 90 was isolated from Lake Vesijärvi and produces microcystins, anabaenopeptilides and anabaenopeptins. A whole-genome shotgun sequencing project was undertaken to obtain the complete genome of this organism in order to better understand the physiology and environmental impact of toxic cyanobacteria. This work describes the genome assembly and finishing, the genome structure, and the results of intensive computational analysis of the Anabaena sp. strain 90 genome. Altogether 119,316 sequence reads were generated from three genomic libraries with 2, 6 and 40 kb inserts using high-throughput Sanger sequencing. The software package Phred/Phrap/Consed was used for whole-genome assembly and finishing. A combinatorial PCR method was used to establish relationships between the remaining contigs after thorough scaffolding and gap-filling. The final assembly shows that there is a single 4.3 Mb circular chromosome and 4 circular plasmids with sizes of 820, 80, 56 and 20 kb, respectively. Together, these 4 plasmids comprise nearly one-fifth of the total genome. Genomic variations in the form of 79 single nucleotide polymorphisms and 3 sequence indels were identified from the assembly results. Sequence analysis revealed that 7.5 percent of the Anabaena sp. strain 90 genome consists of repetitive DNA elements. The genome sequence of Anabaena sp. strain 90 provides a more solid basis for further studies of bioactive compound production, photosynthesis, nitrogen fixation and akinete formation in cyanobacteria.
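The claim that the four plasmids comprise nearly one-fifth of the total genome can be checked directly from the replicon sizes given in the abstract (4.3 Mb chromosome; 820, 80, 56 and 20 kb plasmids). A minimal arithmetic sketch:

```python
# Replicon sizes from the abstract, in kilobases.
chromosome_kb = 4300            # 4.3 Mb circular chromosome
plasmids_kb = [820, 80, 56, 20]  # four circular plasmids

total_kb = chromosome_kb + sum(plasmids_kb)
plasmid_fraction = sum(plasmids_kb) / total_kb
print(f"total genome: {total_kb / 1000:.2f} Mb, "
      f"plasmid share: {plasmid_fraction:.1%}")
```

With these figures the plasmid share works out to roughly 18.5 % of a ~5.3 Mb genome, consistent with the abstract's "nearly one-fifth".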
  • Supponen, Mirjam (2016)
    Today’s global economy has turned into an information economy in which data has become the new commodity. Data can be transferred internationally in a split second. In the process, jurisdictional lines have become blurry, and companies struggle to understand which rules apply to their data processing activities. Jurisdiction in the Internet realm is one of the most complex challenges, especially concerning issues related to data privacy. Different jurisdictions understandably attempt to extend the application of their data privacy rules as far as possible in order to safeguard their citizens’ rights. However, this practice of extraterritoriality has revealed a deeper privacy conflict between the European and American privacy traditions. The EU started its most recent privacy challenge with the US by affirming the so-called “right to be forgotten” in Google Spain SL v. Agencia Española de Protección de Datos (Case C-131/12, 13 May 2014), a decision by the Court of Justice of the European Union (“CJEU”). The decision represents one of the most far-reaching applications of European data privacy laws and one of the first attempts to regulate privacy online in the information age. This and subsequent CJEU decisions have made it increasingly difficult for companies to avoid EU data privacy laws. This thesis begins by exploring the context of the greatly differing privacy discussions occurring in the EU and the US – the two most influential privacy jurisdictions in the world. Next, the thesis examines the legal implications of the Google Spain decision, especially regarding the decision’s territorial scope and impact. The purpose of this thesis is to explore when companies are obligated to follow the European rules on data privacy after Google Spain and related cases, including the widening definition of “establishment” under Article 4(1)(a) of the Data Protection Directive (“DPD”).
Finally, this thesis will consider new ways of defining the extraterritorial effect of EU data privacy law. Although this research primarily assumes an EU perspective, many aspects are relevant to US and international entities.
  • von Pfaler, Lauri (2020)
    This thesis investigates the history and consequences of the post-WWII naturalisation of capitalism. It draws centrally on the social history of political thought, an approach to intellectual history developed by Ellen Meiksins Wood and Neal Wood, and situates the transformations that turned economic history into neoclassically oriented historical economics ‒ the most fundamental example of naturalisation in the period under investigation ‒ in their wider socio-political context. The aim is to understand the politics of concept-formation and discipline reconstruction. The thesis presents the commercialisation model, the central naturalising account of the origins of capitalism. It equates capitalism with trade, markets, and towns, and explains its emergence circularly by capitalist phenomena and dynamics. Capitalism becomes universal, a naturalised and expected development that is only impeded by political or cultural fetters. In contrast, the thesis claims that capitalism is a historically specific arrangement of social relations, norms, and practices. The characteristics that are both specific and historically central to it are accounted for by a brief history of their unintended emergence as a result of class conflicts in the medieval English countryside. The thesis then considers the absence of capitalism as an analytical and historical concept in the specialised discipline devoted to the economy. Thereafter, it presents the building blocks of historical economics: an abstract concept of the market as an information processor, reified notions of information and choice, and mathematics. All emanate from post-WWII economics, and the origins of the first three are traced to the twentieth-century struggle against collectivism and Marxism.
Next, the thesis situates the construction of historical economics, a universalising and increasingly ahistoricist field, in the socio-historical context from the 1950s onwards, emphasising important similarities with neoliberal thought and Friedrich Hayek. Two disciplinary developments are shown to be crucial. The first, cliometrics, is constituted by the direct use of neoclassical economics to study history. The second, new institutional economics (NIE), is a product of the 1970s. NIE claims to be more realistic and historical than neoclassical economics, but shares the latter's naturalising impulses. It is actually a more powerful tool of naturalisation because its framework allows the explanation of the social in terms of the economic. These transformations had profound implications for the understanding of capitalism. The theoretico-methodological framework ensures that historical economics projects aspects that are historically specific to capitalism onto non-capitalist historical contexts. Consequently, the latter are portrayed as qualitatively similar to the former in a way that re-embraces and refines the older commercialisation thesis: markets and private property are naturalised; relative price changes become the motor of history; and capitalism ‒ or a variant of its conceptual ‘place-holders’ ‒ is argued to have only alternatives that end in tragedy. Finally, the policy implications of naturalisation are assessed.
  • Kaskimäki, Jutta (2017)
    The principle of pacta sunt servanda has a long history in international treaty law. Its unwavering position as the fundamental principle governing contracts is undeniable. However, in the ever-changing world of international relations, change should be regarded as an important part of treaty relations. This thesis examines the relationship between treaty stability and change and the dichotomy these two notions form in international treaty law. The Vienna Convention on the Law of Treaties is the most important instrument governing treaties. This thesis concentrates on two specific Articles of said Convention, Articles 61 and 62. These Articles contain the rules on ‘supervening impossibility of performance’ and ‘fundamental change of circumstances’, the latter of which is also known by its Latin phrasing, rebus sic stantibus. These Articles are discussed thoroughly in order to gain a deeper understanding of what they are and how they work. This thesis introduces these two Articles as an integral part of international treaty law and argues for the necessity of accepting them as legitimate grounds for terminating a treaty. The dichotomy of stability and change is reflected in the discussion of pacta sunt servanda and Articles 61 and 62 of the Vienna Convention on the Law of Treaties. The principle of rebus sic stantibus continues to have a troublesome character in international treaty law. It is considered the ‘terrible child’ of international law. Its essence and application are debated in judicial writings and international tribunals. This thesis demonstrates the necessity of this principle in international treaty law. Because Articles 61 and 62 overlap and are so-called ‘siblings’, it is important to examine them together. Small island States in the Pacific serve as a concrete example of the need to accept these Articles as grounds for terminating a treaty.
These microstates may find themselves in difficult situations because of climate change and the subsequent rise in sea levels. Because they are gradually submerging into the ocean, they may confront situations that cause their treaty obligations to become unfair or impossible to fulfil. Such situations could call for the application of Articles 61 and 62 of the Vienna Convention on the Law of Treaties. There needs to be a way to understand these Articles as complementing the principle of pacta sunt servanda and as evolving the law of treaties in a way that accommodates change.
  • Toikkanen, Ilkka (2018)
    This Master's Thesis examines the compatibility of the European Union's present and future data protection law with connected traffic, which encompasses traffic technologies utilizing various electronic communication networks. The new forms of traffic enable services that, for instance, enhance safety, reduce pollution and make driving automatic. However, they require a constant flow of electronic communications data, most of which is considered personal data under EU data protection law. This Thesis concentrates on scrutinizing consent as a lawful basis for the processing of the location data used in the new traffic solutions. The main method of this study is the legal doctrinal method, and the future EU law is examined from a de lege ferenda perspective. First, the Thesis posits an answer to the research question “how do the present and future ePrivacy legal instruments of the EU regulate the legal basis for location data processing in the context of connected vehicles?”. After that, solutions are proposed for the second research question “how can these obligations be met in a way that would make connected traffic possible?”. The current data protection framework of the EU consists of the General Data Protection Regulation and the ePrivacy Directive. These legal instruments set consent as the sole applicable basis for the majority of location data processing in connected traffic. In the light of the opinions and proposals of the EU legislative institutions, the new ePrivacy Regulation, which is currently being prepared, does not seem to change the situation. Under the GDPR, consent has to be a freely given, specific, informed and unambiguous indication of the data subject's wishes. The forms of data processing essential for connected traffic, especially communication taking place only between machines (M2M), do not enable acquiring consent from drivers in accordance with the requirements of the GDPR.
This Thesis aims to solve the consent issue by examining anonymization and pseudonymization, consent management services, M2M exemptions, and legal bases for data processing that are alternatives to consent. To enable the location data processing necessary for connected traffic, the EU legislators should utilize all of these solutions in the new ePrivacy Regulation. The study posits that the protection of personal data and the confidentiality of communications can be reconciled with the new traffic technologies by acknowledging the different purposes of data processing in connected traffic and defining a suitable legal basis and level of regulation for each purpose. The new legal bases can be supported by the use of pseudonymization, by the further study and utilization of consent management services, and by the introduction of a household exemption to enable the M2M communications taking place between vehicles.