
Browsing by study line "General track"


  • Jämsä, Pentti (2021)
    The purpose of this thesis is to explore the link between labour markets and transportation accessibility. Accessibility is estimated with the logsum measure and with travel times of public and private transportation. The logsum has been used in transportation system estimation, although as a measure of accessibility it is not as established as travel times; using both gives a thorough overview of the effects of accessibility on employment. I also review how transport impact assessment has been done historically and how wider economic impacts are considered in that framework. Cost-benefit analysis is widely used, but the framework should be improved in how it includes wider economic impacts. The methods section explains how transportation system estimation is done. The data used in the estimation come from three sources: interview-based travel habit data from Helsingin Seudun Liikenne, Statistics Finland's metropolitan regional employment statistics, and public and private transportation travel times from the Helsinki Region Travel Time Matrix. In the results I go through the empirical model. Travel time, logsum, and employment are all treated as elasticities. The results of the model are reasonable and in line with the previous literature, but the coefficient of determination is quite low. This means the results still need confirmation, either through better data or a better estimation model. My results can be viewed as preliminary evidence of the link between employment and accessibility.
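The elasticity framing in the abstract above can be illustrated with a minimal log-log OLS sketch. The data and the 0.5 elasticity are synthetic assumptions for illustration, not the thesis's actual model or data:

```python
import numpy as np

def elasticity_ols(employment, accessibility):
    """Estimate the elasticity of employment with respect to an
    accessibility measure (travel time or logsum) via a log-log OLS fit:
    log(employment) = a + b*log(accessibility), where b is the elasticity."""
    x = np.log(np.asarray(accessibility, dtype=float))
    y = np.log(np.asarray(employment, dtype=float))
    X = np.column_stack([np.ones_like(x), x])        # intercept + regressor
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ np.array([a, b])
    ss_res = np.sum((y - fitted) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                       # coefficient of determination
    return b, r2

# Synthetic example: employment rises with accessibility, true elasticity 0.5
rng = np.random.default_rng(0)
acc = rng.uniform(1.0, 10.0, 200)
emp = 100.0 * acc ** 0.5 * np.exp(rng.normal(0.0, 0.1, 200))
beta, r2 = elasticity_ols(emp, acc)
```

A log-log specification is the standard way to read a regression coefficient directly as an elasticity, which matches how the abstract describes its results.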
  • Kuokka, Karri (2021)
    The use of pricing algorithms has become more common over the past decades, driven in particular by the growing popularity of electronic marketplaces, i.e. online stores. In addition, the technology behind these algorithms has advanced, and pricing-algorithm services are increasingly accessible to firms. This thesis examines the effects of algorithms used in pricing decisions on market behaviour in which several firms coordinate their actions to earn profits above the competitive level. Such anti-competitive cooperation between firms, i.e. collusion, is usually harmful to consumers and to competition, so the effects of pricing algorithms on collusion deserve scrutiny as well. The aim of this thesis is to examine the effects of pricing algorithms especially on tacit collusion. A further question is whether algorithms based on artificial intelligence are capable of colluding autonomously, that is, entirely without human guidance. The literature review delves into AI-based algorithms and tacit collusion at the theoretical level. The effects of pricing algorithms on tacit collusion are difficult to study empirically, because tacit collusion can be hard to detect in markets and firms rarely disclose the algorithms they use in their pricing. For this reason, the second research method of the thesis is an experimental computer simulation, used to examine the effects of pricing algorithms on tacit collusion and to establish experimentally whether AI-based pricing algorithms are capable of collusion autonomously. According to the results, the use of pricing algorithms can affect tacit collusion: it can change market structures and thereby influence tacit collusion. The effects on market structures are, however, somewhat unclear, and the effects on tacit collusion may either strengthen or weaken it. In addition, pricing algorithms can also act indirectly as facilitators of tacit collusion. A particularly interesting result is the ability of AI-based pricing algorithms to learn collusion autonomously; this result was supported both by the literature review and by the computer simulation conducted in this thesis. The increasingly common use of pricing algorithms and their further development may create entirely new kinds of problems for safeguarding and regulating effective competition. Based on the experimental results, AI-based algorithms appear capable of colluding autonomously. Future research on the effects of pricing algorithms on collusion should be expanded. A challenge, however, may be how such research can be carried out in an economic environment that corresponds sufficiently closely to a real market situation.
  • Kanervo, Atte Jonatan (2021)
    This thesis investigates the tax base and allocation choices in international corporate income tax architecture and evaluates the effects of the choices made in three different systems: the current system, residual profit allocation, and OECD Pillar One. International corporate income tax design has a significant effect on the functioning of the international economy and on the welfare of individuals, so making the correct design choices is extremely important. This thesis argues that the international corporate income tax system should be designed following certain important principles of taxation: 1) fairness, 2) economic efficiency, 3) robustness to avoidance, 4) administrative ease, and 5) incentive compatibility. The different systems are then introduced in turn and evaluated against these criteria. The thesis finds that the current system suffers from certain conceptual weaknesses that leave significant room for improvement with regard to the set criteria. It is further argued that a reform is required for the continued functioning of the international system. Such a reform could be introduced in the form of residual profit allocation. The OECD Pillar One proposal involves elements of residual profit allocation, but in comparing the different systems with each other, this thesis argues that the OECD proposal is too narrow in scope to gain the full benefits of a residual profit allocation system.
  • Nevanlinna, Kimmo (2022)
    Abstract Faculty: Faculty of Social Sciences Degree programme: Master's Programme in Economics Study track: General study track in Economics Author: Kimmo Nevanlinna Title: The pricing efficiency of smart contract platforms Level: Master's thesis Month and year: 11/2022 Number of pages: 48 Keywords: blockchain, random walk, cryptocurrency, smart contract Supervisor or supervisors: Jani Luoto Deposited at: Helsingin yliopiston kirjasto Other information: Abstract: This master's thesis tests the random walk hypothesis for the prices of three cryptocurrencies specialized in smart contracts: Ethereum, Cosmos, and Tezos. Cryptocurrencies and blockchains have entered public discussion in recent years as their prices have fluctuated wildly; their price movements are now followed in major international and domestic financial news, yet the understanding of what drives those movements remains incomplete. Bitcoin, the best-known cryptocurrency, launched in 2009, but over the past five years smart contracts have also entered public discussion. In general discussion, however, cryptocurrencies are treated as interchangeable, and their differences are not understood well enough. My interest in smart contracts and their possibilities was sparked in 2018, and that interest led me to study the topic from an economics perspective in this master's thesis. The thesis explains what smart contracts are and how their use increases demand for the cryptocurrency on which the smart contract is executed. It also reviews what blockchains are. The empirical part uses autocorrelation analysis of the return series as its method: under efficient price formation, no autocorrelation should be found. Autocorrelation is measured with the Ljung-Box test on daily returns over 1.1.2019-31.6.2021. The second method is unit root estimation, tested with the Dickey-Fuller test. A unit root is found in the two larger cryptocurrencies, Ethereum and Cosmos, but not in the smaller Tezos. The conclusion is that the larger cryptocurrencies appear roughly efficient in terms of the random walk, but all three cryptocurrencies' return series contain autocorrelation.
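The Ljung-Box test mentioned above can be sketched in a few lines: the statistic is Q = n(n+2) Σ ρ_k²/(n−k) over the first m lags, compared against a chi-squared critical value. The series below are simulated, not actual cryptocurrency returns:

```python
import numpy as np

def ljung_box_q(returns, lags=10):
    """Ljung-Box Q statistic for a return series. Under the null of no
    autocorrelation, Q is approximately chi-squared with `lags` degrees
    of freedom; a large Q rejects efficient (uncorrelated) returns."""
    r = np.asarray(returns, dtype=float)
    r = r - r.mean()
    n = len(r)
    denom = np.sum(r ** 2)
    q = 0.0
    for k in range(1, lags + 1):
        rho_k = np.sum(r[:-k] * r[k:]) / denom   # lag-k sample autocorrelation
        q += rho_k ** 2 / (n - k)
    return n * (n + 2) * q

rng = np.random.default_rng(1)
# White noise: should look consistent with no autocorrelation
q_iid = ljung_box_q(rng.normal(size=500), lags=10)
# Strongly persistent AR(1) series: should clearly reject the null
e = rng.normal(size=500)
ar = np.empty(500)
ar[0] = e[0]
for t in range(1, 500):
    ar[t] = 0.9 * ar[t - 1] + e[t]
q_ar = ljung_box_q(ar, lags=10)
CHI2_10_95 = 18.307   # 95th percentile of chi-squared with 10 dof
```

In practice one would use a library implementation (e.g. `statsmodels.stats.diagnostic.acorr_ljungbox`) on the daily return series, but the statistic itself is this simple.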
  • Kurppa, Sara (2021)
    The adequacy of the labour force is one of the biggest future policy questions in Western countries. Solutions are sought, among other things, by lengthening the careers of current workers without increasing the costs caused by disability pensions. One possible solution is the return-to-work opportunities offered by vocational rehabilitation, through work trials or retraining for an occupation better suited to the person's state of health. The study begins with a closer look at vocational rehabilitation: its process, target group, and possible rehabilitation measures. The aim is to determine the effect of individual-level variables on participation in vocational rehabilitation, and additionally whether participation can be predicted already at the application stage. Previous studies have shown age, gender, educational background, and the number of sickness absences to be significant variables here. The data consist of preliminary vocational rehabilitation decisions issued by a single pension institution in 2016-2019. The research methods are the chi-squared test, logistic regression, and random forest. According to the logistic regression results, the most important variables affecting participation in vocational rehabilitation are the person's gender, the amount of rehabilitation allowance, certain occupations, and annual earnings during the five years preceding the rehabilitation decision. Based on the results of a random forest built as a prediction model, participation in vocational rehabilitation can be predicted. The prediction accuracy for the positive class is clearly better than for the negative class. The chi-squared test, logistic regression, and random forest nevertheless also yield results that differ from one another.
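Of the three methods listed above, the chi-squared test of independence is the simplest to sketch. The contingency table below is hypothetical, not the pension institution's data:

```python
import numpy as np

def chi2_independence(table):
    """Pearson chi-squared statistic for a contingency table, e.g.
    rows = gender, columns = participated in vocational rehabilitation
    (yes/no). Expected counts assume independence of rows and columns."""
    obs = np.asarray(table, dtype=float)
    row = obs.sum(axis=1, keepdims=True)
    col = obs.sum(axis=0, keepdims=True)
    expected = row * col / obs.sum()
    return float(np.sum((obs - expected) ** 2 / expected))

# Hypothetical 2x2 table: participation clearly differs between the groups
stat = chi2_independence([[120, 80], [60, 140]])
CHI2_1_95 = 3.841   # 95th percentile of chi-squared with 1 dof
```

Comparing the statistic to the chi-squared critical value (here 1 degree of freedom for a 2x2 table) tests whether participation is independent of the grouping variable.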
  • Peltonen, Henri (2024)
    Banking crises have been found to cause significant fiscal and real costs for the economy. For this reason, macroprudential policymakers have developed various analytical models for predicting new banking crises ahead of time; with this information they can undertake targeted countermeasures to reduce the negative impacts. Binary regression models (especially logit) have long been the main analytical tool in this prediction exercise. However, due to the complex dynamics and rarity of crises, accurate prediction remains a difficult task for these models. In line with recent developments in technology and artificial intelligence, scholars have started investigating the possibilities of using machine learning methods in banking crisis prediction. Despite the promise of more flexible distributional assumptions and enhanced modeling of non-linear relationships, the early results on predictive performance have been mixed. One explanation could be the large variety of models and empirical setups that different authors have used. As a result, it remains unclear whether the results are driven by changes in the underlying empirical setups or by the superiority of the machine learning models themselves. To investigate this problem, this thesis collects out-of-sample prediction results from eleven banking crisis papers published between 2017 and 2023. After a normalization procedure to enhance comparability between the papers, the results are pooled for analysis to gain insights into which machine learning models perform best. Additional robustness checks are carried out to investigate the stability of the results. This thesis makes two main contributions to the literature. The first is finding systematic differences in predictive performance between machine learning models: neural network, random forest, and boosted/bagged tree models have on average delivered the best predictive performance relative to logit models, whereas k-nearest-neighbors, decision tree, and support vector machine models consistently underperform the logit benchmarks. The second is creating novel connections between the banking crisis and machine learning literatures. The empirical results obtained in this thesis are compared with and found to be aligned with the machine learning literature. In addition, a critical review of the practical implications of using machine learning is conducted, highlighting issues with interpretability, modeling, and class imbalances.
  • Jokivuori, Jessica (2023)
    Taxation is a critical tool for development, as well-designed tax systems can generate greater revenues to fund public goods and investments that drive productivity. However, lower-income countries often raise only a small percentage of their gross domestic product (GDP) in taxes compared to higher-income countries. The gap in revenue is particularly striking for property taxes. This thesis begins with a literature review on taxation in developing countries and property taxation in general, emphasizing the challenges of property taxation in developing countries. It then provides relevant background information on Kenya, discussing inequality, general taxation, and property taxation. Using survey data from the Kenya Integrated Household Budget Survey 2015-2016, the thesis then investigates the distribution of property value and income among Kenyan households. It then explores the potential revenue and distributional effects of two property taxation models while also considering potential revenue loss from taxing households with insufficient income or savings. The trends between household income and property value indicate that higher annual incomes correlate with higher property values and vice versa. However, low-income individuals also owned higher-value properties, leading to liquidity problems. A 2% linear property tax rate model revealed a heavy tax burden on lower-income households and an overall revenue potential of 2.38% of total survey income or 0.94% of 2016 GDP. The study observed significant decreases in revenue potential when adjusting revenue to account for the ability to pay. This research also modeled a property tax rate payable only upon reaching a specific income threshold to address liquidity problems. In this model, the tax burden shifted to higher-income households with an overall potential tax revenue of 1.91% of total survey income or 0.76% of GDP. 
In conclusion, the observed trends, such as the high prevalence of inability to pay, relatively low revenue potential, and the administrative effort required for property taxation, suggest that reforming property taxation may not be the most practical approach for increasing revenue in Kenya.
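The two property tax models explored above can be sketched as follows. The 2% rate matches the abstract, but the households, currency units, and income threshold are invented for illustration:

```python
def linear_tax(properties, rate=0.02):
    """Flat-rate property tax: every household pays rate * property value,
    regardless of income - the model that burdened low-income households."""
    return [rate * v for v in properties]

def threshold_tax(properties, incomes, rate=0.02, income_floor=100_000):
    """Property tax due only once household income reaches a threshold,
    shifting the burden away from low-income, high-property-value
    households (the income_floor value here is a made-up parameter)."""
    return [rate * v if y >= income_floor else 0.0
            for v, y in zip(properties, incomes)]

# Hypothetical households: property values and annual incomes
values  = [500_000, 300_000, 800_000]
incomes = [ 40_000, 150_000, 200_000]
flat = linear_tax(values)                 # everyone pays
cond = threshold_tax(values, incomes)     # first household is exempt
```

The comparison `sum(cond) < sum(flat)` reproduces the abstract's qualitative finding: conditioning on ability to pay lowers total revenue potential while shifting the burden to higher-income households.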
  • Tossavainen, Tuuli (2021)
    Asymmetric information in insurance markets is the result of policyholders, the buyers of insurance, having more information about their own risk types and preferences than the insurer. Informational asymmetry between the insurer and policyholders can lead to non-optimal insurance prices and quantities, which reduce market efficiency. While the presence of asymmetric information has been widely studied in several insurance markets, it has not previously been studied empirically in the Finnish automobile insurance market. This thesis aims to fill this gap in the literature. The Finnish automobile insurance market consists of two types of insurance: motor liability insurance, required by law for all vehicles used for driving in traffic, and voluntary automobile insurance, which can be acquired in addition to the mandatory motor liability insurance. In this thesis, the presence of asymmetric information is studied by comparing the occurrence of motor liability insurance claims, conditional on the pricing variables used by the insurer, between policyholders who only have a motor liability insurance policy and policyholders with an additional automobile insurance policy. The data set is from a single Finnish insurance company, covers the year 2019, and contains nearly 105,000 motor liability insurance policies, including all variables observed by the insurer. Several regression specifications and the widely used positive correlation test are used to study the correlation between insurance coverage and motor liability insurance claims. The results suggest that signs of asymmetric information are not present at the aggregate level in the Finnish automobile insurance market in question. However, different subgroups of policyholders do show signs of asymmetric information: after controlling for the pricing variables, policyholders with the largest automobile insurance coverage show a positive correlation between buying automobile insurance and motor liability insurance claims, whereas policyholders with the third-largest coverage show a negative coverage-claims correlation. The results from different regression specifications regarding different coverages were not consistent, however, so the results remain ambiguous. In addition, new policyholders considered experienced drivers show a negative correlation between motor liability insurance claims and having automobile insurance coverage. By contrast, policyholders considered experienced drivers with 1-2 years of company experience do not show signs of asymmetric information. This suggests that the insurer learns from its repeat customers, as signs of informational asymmetry disappear over time. Moreover, policyholders considered inexperienced drivers show no signs of asymmetric information regardless of the length of their customer relationship with the firm. The results are in line with previous research.
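The positive correlation test referred to above can be illustrated with a toy version that omits the conditioning on pricing variables central to the actual method; the data below are simulated, not the insurer's:

```python
import numpy as np

def coverage_claims_correlation(coverage, claims):
    """Sample correlation between a binary extra-coverage indicator and a
    binary claim indicator. In the positive correlation test, a
    significantly positive value (conditional on the insurer's pricing
    variables, which this toy version omits) signals asymmetric information."""
    return float(np.corrcoef(coverage, claims)[0, 1])

# Simulated cell: policyholders with extra coverage claim more often
rng = np.random.default_rng(2)
coverage = rng.integers(0, 2, 1000)
claims = np.where(coverage == 1,
                  rng.random(1000) < 0.15,   # 15% claim rate with extra coverage
                  rng.random(1000) < 0.05)   #  5% claim rate without
claims = claims.astype(int)
rho = coverage_claims_correlation(coverage, claims)
```

A positive `rho` in this construction mirrors the subgroup finding in the abstract; the full test would condition the claim probabilities on everything the insurer observes.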
  • Sjöholm, Tobias (2023)
    Personalized pricing as a pricing strategy has become possible as a result of technological advancement. Personalized pricing uses data to set prices that differ from uniform pricing, and welfare effects change as a result. This master's thesis uses a two-period oligopoly model to analyze the welfare effects of personalized pricing and then modifies the model to account for EU regulations. The model finds that regulation helps mitigate negative welfare effects by reducing the amount of inefficient switching, and that a price ceiling helps reduce the appropriation effect for a consumer with a high willingness to pay. A case study illustrates a use case in an oligopolistic market to bring real-world context to the theoretical model. The research question is important because it increases awareness of the effects of regulation on personalized pricing in the European Union internal market.
  • Tiililä, Nea (2019)
    The regulatory framework for financial regulation has developed considerably in Europe since the financial crisis. The use of borrower-based macroprudential instruments as regulatory tools has become popular among the European Economic Area (EEA) countries: already 21 of the 31 EEA countries have at least one borrower-based instrument in use. The most commonly used borrower-based instruments are the loan-to-value (LTV) limit, loan-to-income (LTI) limit, debt-to-income (DTI) limit, loan-service-to-income (LSTI) limit, debt-service-to-income (DSTI) limit, amortisation requirement, and maturity limit. As these instruments have only recently been introduced as regulatory tools in Europe, their effectiveness and transmission channels are still under discussion. The aim of this master's thesis is to contribute to that discussion. The thesis provides a broad literature review to understand the transmission of each of the borrower-based instruments and to explore previous findings on their impacts. Further, an empirical analysis using a panel vector autoregression (PVAR) model studies whether borrower-based macroprudential instruments have any effect on housing market stability and the real economy in Europe. The data used to answer this question consist of growth rates of the mortgage stock, a house price index, a construction index, household consumption, and GDP. According to the literature review, borrower-based macroprudential instruments function through four transmission channels: the credit demand channel, the expectations channel, the resilience channel, and the anti-default channel. The empirical analysis provides evidence that tightening the borrower-based instruments reduces mortgage growth. House prices react negatively to a policy shock in the short run but positively in the long run, and construction reacts negatively. Household consumption, in turn, responds positively in the short run but negatively in the long run. Finally, GDP responds negatively to a policy shock. However, the result concerning construction growth is the only one that is statistically significant at the 95% confidence level; all the other results lack statistical significance. Overall, the empirical results of this thesis provide weak evidence that tightening borrower-based macroprudential instruments restrains the growth of the mortgage stock, which for its part should enhance the stability of housing markets in Europe. Further, the impact on economic growth is likely negative, although not statistically significant at the 95% level. The difficulties in fitting the model and the lack of significance may indicate that the chosen model is not the most suitable one for studying the effectiveness of borrower-based macroprudential instruments.
  • Gomes Santos, Pedro (2022)
    The prevailing volatility of the price/spread of catastrophe risk around this innovative type of instrument, the CAT bond, gave rise to this literature. Unlike conventional insurance risks (cars, houses, and so on), risk associated with natural and human catastrophes is more unpredictable and costly for (re)insurance companies. Insurance and reinsurance companies have found a way to finance this expensive risk by shifting it to investors through insurance-linked securities (ILS), most notably and successfully CAT bonds. By cross-checking data and information from a multitude of sources, I investigate the main determinants of the price, spread, or coupon of a catastrophe bond on the primary market for those instruments. This paper gathers data on 284 catastrophe bonds issued between January 2013 and October 2021, provided by the Artemis deal directory. My research contains an introduction to these innovative bonds, an overview of previous research on the question and its results, and an empirical part addressing the main goal of this work: identifying which variables influence the price of a CAT bond in the primary market. OLS regression techniques with heteroskedasticity- and autocorrelation-consistent standard errors, based on multifactor models, are used to identify the main determinants of the price. The work of Alexander Braun is the main inspiration for this thesis; I apply some of the same techniques, subject to the data available and Stata's limitations. The outcome of the larger model, including the whole set of variables and interaction terms, shows that expected loss is the major determinant of catastrophe risk prices, both in-sample and out-of-sample and across diversified subsamples and models. In line with the conclusions of previous researchers, the expected loss variable affects the CAT bond's price positively and far more strongly than any other variable.
  • Walta, Veikko (2020)
    The determinants of FDI have been a topic of interest in economics since the 1980s, and this paper aims to contribute to this field. The study measures how strongly FDI is associated with political risk and examines the extent of this relationship in Turkey in the years 1996–2017. Political risk is measured through changes in indexes provided by the World Bank, Freedom House, and Transparency International: Political Rights, Civil Liberties, the Corruption Perceptions Index, Regulatory Quality, Voice and Accountability, Rule of Law, Government Effectiveness, Control of Corruption, and Political Stability. The earlier literature on FDI and political risk is mostly empirical, with little theoretical research. Chakrabarti analyzed past studies on FDI and its determinants in 2001 and found that in the earlier research, almost every explanatory variable of FDI except market size was sensitive to small changes in the conditioning information set, casting doubt on the robustness of the results. Studies addressing political risk or equivalent concepts have also been conducted. Busse and Hefeker's 2005 study addressed the same topic as this paper, but their data covered many countries and they employed two different panel models: one a fixed-effects panel analysis, the other a generalized method of moments estimator. I selected three model specifications for the time-series regression analysis. All three include market size as a control variable; the second and third also include the economy's growth rate and trade openness, and the third adds the inflation rate as a final control. The data have a small number of observations, which limits the options available for the empirical part of the study. Out of the nine political indicators, Regulatory Quality is the only one not associated with FDI, while the results on the Corruption Perceptions Index and Control of Corruption are inconclusive. The remaining six are associated with FDI. The Rule of Law index has the highest estimated coefficient among the World Bank indicators, and the Political Rights index the highest among the Freedom House indicators.
  • Rissanen, Julius (2021)
    Abstract Faculty: Faculty of Social Sciences Program: Economics Study track: General Track Author: Julius Vili Henrik Rissanen Title: Comparing cost-effectiveness of short-term and long-term psychodynamic psychotherapies focusing on patients with depressive disorder and their work ability during a 5-year follow-up. Level: Master's Thesis Month and Year: November 2021 Number of Pages: Keywords: Psychotherapy; cost-effectiveness; Work Ability; psychodynamic; randomized trial; Instructors: Roope Uusitalo, Lauri Sääksvuori, Costanza Biavaschi, Olavi Lindfors Deposited at: Helsingin Yliopiston kirjasto Other information: Abstract: Background: Mental health disorders pose a significant burden on society, for example through decreased work ability. Psychotherapy, one of the most important treatment methods, also causes significant costs for the healthcare system. Studying the cost-effectiveness of different therapy types can help promote better targeting of treatments and economic efficiency in society. Aims: To explore cost-effectiveness in improving work ability between short-term and long-term psychodynamic psychotherapy in patients with depression. Methods: The 192 depressive patients randomized to two psychotherapies of different lengths in the Helsinki Psychotherapy Study were measured at baseline and annually for five years. The Work Ability Index (WAI) and Global Assessment of Functioning (GAF) as effectiveness outcome measures were compared to the total direct costs with incremental cost-effectiveness ratios (ICER) between the treatments. Results: The total direct cost of short-term psychodynamic psychotherapy (SPP; €7,087) was significantly lower than that of long-term psychodynamic psychotherapy (LPP; €19,959). The largest driver of the cost difference between the treatments was protocol study therapy costs (SPP €1,304; LPP €16,715). In addition, those randomized to SPP incurred significant costs during the follow-up from non-protocol auxiliary psychotherapy treatments (€5,142), more than five times those of the LPP group. All of these cost differences between the treatment groups were statistically significant. Psychotropic medication and outpatient care each averaged below €2,000, and the differences were not statistically significant. Psychiatric hospitalization during the follow-up was rare but yielded significant costs for the patients concerned. Differences in effectiveness between the treatment groups in work ability were not statistically significant. The incremental cost-effectiveness ratio was highly unstable due to small differences in effectiveness but large differences in cost. Conclusions: The study found a clear cost difference in favour of SPP, with no loss of treatment effectiveness. However, patients in the SPP group used a significant amount of non-protocol auxiliary psychotherapy, which may indicate insufficient therapy; the absence of a difference in effectiveness can thus be attributed to the widespread use of additional treatments in the SPP group. Going forward, it would be worthwhile to expand the study to account for the impact of patients' suitability for treatment, particularly for understanding SPP cost-effectiveness.
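The ICER used above is simply the cost difference divided by the effect difference. In the sketch below, only the total cost figures come from the abstract; the effect values are invented to show how a small effect difference makes the ratio unstable:

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of
    effect (e.g. Work Ability Index points) of one treatment over another.
    The ratio blows up as the effect difference approaches zero."""
    d_effect = effect_new - effect_old
    if d_effect == 0:
        raise ZeroDivisionError("no effect difference: ICER is undefined")
    return (cost_new - cost_old) / d_effect

# LPP vs SPP total direct costs are from the abstract; the WAI-style
# effect values 40.0 and 39.5 are hypothetical
ratio = icer(cost_new=19_959, effect_new=40.0, cost_old=7_087, effect_old=39.5)
```

With a hypothetical effect gap of only 0.5 points, the €12,872 cost gap yields a very large ratio, illustrating the instability the abstract reports.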
  • Ahonen, Elena Venla Maria (2017)
    The aim of this thesis is to demonstrate the importance of selecting feasible and preferably data-based prior assumptions for the Bayesian time-varying coefficient vector autoregressive model (hereafter TVC VAR model) of Primiceri (2005) and Del Negro and Primiceri (2015). The TVC VAR model is suitable for quantifying, for example, the impacts of different monetary or fiscal policy regimes. Its biggest advantage is that it takes into account changes both in economic policy and in private sector behaviour; the latter feature makes the model very compelling, because the private sector plays an important role in facilitating more stable changes in monetary and fiscal policy regimes. In complex mathematical models such as the TVC VAR model, the objectivity of the model may be compromised by deliberate selection of parameters. The TVC VAR model uses the Bayesian approach, which means that the researcher's choice of prior assumptions plays an important role in the estimation. Unfortunately, Primiceri's (2005) approach to selecting hyperparameters for the model is poorly explained and difficult to follow. Given that the model depends on only a small number of hyperparameters, it might be possible to tune the model in a predefined way. To investigate whether the TVC VAR model can be tuned according to a researcher's preferences, I design a proof-of-concept approach for optimising the hyperparameters of the model according to a set of predefined results. In other words, my research question is: could one tune the TVC VAR model to produce results according to the researcher's bias? In my proof-of-concept approach I tune the TVC VAR model for six different targets for the Finnish government consumption multiplier. Given that Finland is a small open economy, Primiceri's (2005) original hyperparameter values for the United States are not feasible, and other values have to be used. The results from my proof-of-concept analysis show that the TVC VAR model can be tuned for predefined results, which means that the practical reliability of the model can easily be compromised. My findings highlight the need for a comprehensible, data-based approach for selecting the model's hyperparameters.
  • Kuivaniemi, Esa (2024)
    Machine Learning (ML) has experienced significant growth, fuelled by the surge in big data. Organizations leverage ML techniques to take advantage of this data. So far, the focus has predominantly been on increasing value by developing ML algorithms. Another option is to optimize resource consumption to reach cost optimality. This thesis contributes to cost optimality by identifying and testing frameworks that enable organizations to make informed decisions about cost-effective cloud infrastructure while designing and developing ML workflows. The two frameworks we introduce to model cost optimality are "Cost Optimal Query Processing in the Cloud" for data pipelines and "PALEO" for ML model training pipelines. The latter focuses on estimating the time needed to train a neural network, while the former is more generic in assessing a cost-optimal cloud setup for query processing. Through the literature review, we show that it is critical to consider both the data and the ML training aspects when designing a cost-optimal ML workflow. Our results indicate that the frameworks provide accurate estimates of the cost-optimal hardware configuration in the cloud for an ML workflow. There are deviations in the details: our chosen version of the Cost Optimal Model does not consider the impact of larger memory. The frameworks also do not provide accurate execution time estimates: PALEO estimates that our accelerated EC2 instance executes the training workload in half the time it actually took. However, the purpose of the study was not to provide accurate execution or cost estimates; rather, we aimed to see whether the frameworks identify the cost-optimal cloud infrastructure setup among the five EC2 instances that we chose to execute our three different workloads.
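The cost-optimality comparison across candidate instances reduces to multiplying each instance's hourly price by its predicted runtime and taking the cheapest option. A minimal sketch, with illustrative (not measured) prices and runtime estimates:

```python
def cheapest_instance(instances, estimated_hours):
    """Pick the instance with the lowest estimated total cost.

    instances: {name: hourly_price_usd}
    estimated_hours: {name: predicted workload runtime in hours}
    """
    costs = {name: price * estimated_hours[name]
             for name, price in instances.items()}
    best = min(costs, key=costs.get)
    return best, costs

# Prices and runtime predictions below are illustrative assumptions,
# not figures from the thesis.
prices = {"m5.xlarge": 0.192, "c5.2xlarge": 0.34, "p3.2xlarge": 3.06}
hours  = {"m5.xlarge": 10.0,  "c5.2xlarge": 6.0,  "p3.2xlarge": 0.9}
best, costs = cheapest_instance(prices, hours)
```

The interesting point the thesis makes is that the *ranking* can be right even when the absolute runtime estimates (e.g. PALEO's) are off by a factor of two, since a uniform error leaves the argmin unchanged.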
  • Virtanen, Emil (2021)
    The aim of this master's thesis is to examine how a market crisis affects investor behaviour and the occurrence of the disposition effect and changes in it. The data were collected during the COVID-19 pandemic and consist of Finnish investors. The study also seeks demographic variables that could explain the occurrence of the disposition effect. A framework for the results is first built from previous empirical research and from the theories that have been used to illustrate and explain the disposition effect. After the literature review, the thesis moves to the empirical part, where the disposition effect and changes in it are measured using a dataset of Finnish retail investors. The data consist of two parts: the first contains the transactions of Finnish investors, and the second the price data of stocks listed on the Helsinki Stock Exchange for 2017-2021. The strength of the disposition effect is calculated with a model that compares the ratio of the market values of an investor's unrealized and realized stock positions. The change in the disposition effect during the COVID-19 crisis is studied with time-series analysis, and the relationship between demographic variables and the direction and strength of the disposition effect with regression analysis. The key results show that investors suffer from the disposition effect, which supports previous research. The time-series results indicate that investors' reaction to the COVID-19 crisis reduced the disposition effect and that investors were more willing to realize their losses after the onset of the crisis. The results support previous studies according to which market conditions affect the strength of investors' disposition effect, and theories according to which investors are willing to realize losses when they expect market prices to continue falling.
The regression results show that women suffer from a stronger disposition effect than men, which is also in line with previous findings. Unlike in previous studies, age was not found to affect the occurrence or strength of the disposition effect. The disposition effect is a well-known and empirically tested behavioural bias observed among investors. The results of this thesis support previous research and provide new information on how investors react to a global market crisis, and on how the occurrence and strength of the disposition effect change when investors face a market crisis characterized by fear and uncertainty.
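A ratio-based disposition measure of the kind described in the abstract is commonly operationalised as Odean's PGR − PLR statistic: the proportion of gains realized minus the proportion of losses realized. A minimal sketch (the exact measure used in the thesis may differ):

```python
def disposition_effect(realized_gains, paper_gains,
                       realized_losses, paper_losses):
    """Odean-style disposition measure: proportion of gains realized (PGR)
    minus proportion of losses realized (PLR). A positive value indicates
    a disposition effect: winners are sold more readily than losers."""
    pgr = realized_gains / (realized_gains + paper_gains)
    plr = realized_losses / (realized_losses + paper_losses)
    return pgr - plr

# Toy counts of realized vs. paper (unrealized) gains and losses.
de = disposition_effect(realized_gains=60, paper_gains=40,
                        realized_losses=20, paper_losses=80)
```

Tracking this statistic over time, as the time-series analysis in the thesis does, shows whether a crisis makes investors more willing to realize losses (PLR rises, so the measure falls).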
  • Vesala, Lauri (2023)
    Carbon pricing is a cost-effective instrument of climate change mitigation policy. Its implementation is, however, limited by various political constraints. The goal of this thesis is to examine which factors empirically explain cross-country variation in carbon pricing policy. Understanding the political constraints limiting carbon pricing may have implications for policy design. Previous literature on the empirical determinants of carbon pricing policy has focused mostly on determinants based on political economy theory, such as variation in domestic interests, and has been conducted with data only on explicit carbon pricing. Implicit carbon prices created by fuel excise taxes are, however, a significant part of the total price on emissions. This thesis contributes to the existing literature by introducing two new determinants, public finance considerations and the country-level social cost of carbon, as well as by utilizing broader carbon pricing data. The empirical methods used include regression based on maximum-likelihood estimation of censored data and multiple linear regression. The size of the public sector is found to have a statistically significant positive association with carbon pricing regardless of the model used. This supports the hypothesis that cross-country variation in carbon pricing is empirically explained by a need to finance public spending and by the double-dividend hypothesis. Other factors found to have a clear positive association with carbon pricing are the level of democracy, administrative capacity, and GDP per capita. The results are somewhat mixed concerning the effect of other factors related to political institutions as well as factors related to carbon intensity. The hypothesis that the country-level social cost of carbon positively affects carbon pricing is clearly rejected, which suggests that a competitive game does not describe national-level policymakers' decision-making.
The results of the thesis should not, however, be interpreted as causal because of omitted variable bias, reverse causality, and a lack of time-series data.
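The "regression based on maximum-likelihood estimation of censored data" is typically a Tobit model, left-censored where the observed carbon price is zero. A minimal sketch on synthetic data (the thesis's exact specification, covariates, and dataset are not reproduced here):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_negloglik(theta, X, y, cens=0.0):
    """Negative log-likelihood of a Tobit model left-censored at `cens`,
    as used when many countries report a carbon price of zero."""
    beta, log_sigma = theta[:-1], theta[-1]
    sigma = np.exp(log_sigma)          # parameterised in logs for positivity
    xb = X @ beta
    uncens = y > cens
    # Density contribution of uncensored observations...
    ll = np.sum(norm.logpdf((y[uncens] - xb[uncens]) / sigma) - np.log(sigma))
    # ...plus probability mass of the censored ones.
    ll += np.sum(norm.logcdf((cens - xb[~uncens]) / sigma))
    return -ll

# Illustrative synthetic data (not the thesis dataset).
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y_star = X @ np.array([0.5, 1.0]) + rng.normal(scale=0.5, size=200)
y = np.maximum(y_star, 0.0)            # left-censoring at zero
res = minimize(tobit_negloglik, x0=np.zeros(3), args=(X, y))
```

The censored-probability term is what distinguishes the Tobit likelihood from plain OLS, which would be biased by the mass of zero observations.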
  • Liukkonen, Sini (2020)
    High growth enterprises are important contributors to the aggregate economy, but not much is known about their dynamics. Based on previous literature it is quite clear that their growth is usually not very persistent. The purpose of this study is to find the enterprise characteristics that positively affect the length of the growth period. In this study, extensive micro data from Statistics Finland is used. The data comprise information from the business register, financial statements, foreign trade, ownership and employee registers. Survival analysis methods are used to obtain information on the effects of different enterprise characteristics. The models account for the time-varying nature of the covariates and the coefficients. Based on the results, many of the characteristics have time-varying effects, and the effects are not the same for all size classes of enterprises. It is quite clear, though, that access to foreign markets and innovativeness are important positive factors for the length of the growth period, whereas size and age have negative effects. Survival analysis methods fit this framework quite well and seem to produce robust results.
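Survival analysis of growth-spell lengths starts from an estimate of the survival function, i.e. the probability that a growth spell lasts beyond a given duration. A minimal Kaplan-Meier sketch on toy spell data (the thesis uses richer models with time-varying covariates, which this sketch does not attempt):

```python
from collections import Counter

def kaplan_meier(durations, events):
    """Kaplan-Meier survival curve for growth-spell lengths.
    durations: spell length (e.g. years of high growth);
    events: 1 if the spell ended, 0 if censored (still growing
    at the last observation)."""
    deaths = Counter(t for t, e in zip(durations, events) if e)
    at_risk = len(durations)
    surv, curve = 1.0, {}
    for t in sorted(set(durations)):
        d = deaths.get(t, 0)
        if d:
            surv *= 1.0 - d / at_risk   # multiply in the hazard at time t
        curve[t] = surv
        at_risk -= sum(1 for u in durations if u == t)
    return curve

# Toy spells: (years of consecutive high growth, did the spell end?)
durations = [1, 2, 2, 3, 5, 5]
events    = [1, 1, 0, 1, 1, 0]
curve = kaplan_meier(durations, events)
```

Comparing such curves across groups (e.g. exporters vs. non-exporters) is the simplest way to see whether a characteristic is associated with longer growth spells, before fitting covariate-adjusted models.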
  • Kurki, Jaakko (2019)
    Wage discrimination occurs when employees of equal productivity receive different wages due to characteristics such as ethnicity, sex, or nationality, which do not affect their productivity directly. One of the common challenges in empirical research on wages has always been determining individual employees' productivity. Professional sports leagues such as the NBA (National Basketball Association) provide an ideal setting for the study of salary discrimination, as the salaries, players' backgrounds, and various statistical measures of players' performance throughout their whole careers are publicly available. Therefore, economists have used professional sports leagues when studying salary discrimination by ethnicity or nationality. The objective of this research is to find out whether salary discrimination by nationality occurred in the NBA during the period between 2016 and 2018. The research consists of a literature review that introduces previous findings on salary discrimination by nationality in the NBA, and an empirical part whose aim is to find out whether this discrimination still occurred in 2016-2018. The dataset of this thesis consists of statistics that measure NBA players' on-court performance and salary during the 2016-2017 and 2017-2018 seasons, as well as their nationality and physical attributes. The empirical analysis is carried out using linear regression analysis, which has been a standard in previous studies of salary discrimination by nationality in the NBA. Moreover, this study applies the Blinder-Oaxaca decomposition, one of the standard tools used in salary discrimination studies in general. The statistical analysis of this study does not find discrimination by nationality against either foreign-born or domestic-born NBA players during the sample period. Nevertheless, foreign players earn, on average, around USD 500,000 higher annual salaries than their American contemporaries.
However, according to our analysis, this difference is explained by foreign players' on-court performance rather than their nationality. Some previous studies find that foreign players from large economic markets receive sizeable salary premiums due to marketing possibilities in their home countries. However, this study does not find the market size of a player's home country to have a statistically significant effect on salaries. The earliest literature on salary discrimination by nationality in the NBA dates back to the 1990s. Over the years, the results of previous studies have varied between foreign and domestic players being discriminated against by nationality. However, as statistical tools for analysing player performance have improved drastically and basketball has become a truly global sport, it seems that discrimination by nationality no longer occurs in the NBA.
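The Blinder-Oaxaca decomposition mentioned above splits the mean salary gap between two groups into an "endowments" part (explained by observable performance) and a "coefficients" part (the potential-discrimination residual). A minimal two-fold sketch on synthetic data, with group B's wage structure as the reference:

```python
import numpy as np

def oaxaca(X_a, y_a, X_b, y_b):
    """Two-fold Blinder-Oaxaca decomposition of the mean outcome gap
    between group A (e.g. foreign players) and group B (e.g. domestic
    players), using group B's coefficients as the reference structure."""
    beta_a, *_ = np.linalg.lstsq(X_a, y_a, rcond=None)
    beta_b, *_ = np.linalg.lstsq(X_b, y_b, rcond=None)
    xbar_a, xbar_b = X_a.mean(axis=0), X_b.mean(axis=0)
    explained = (xbar_a - xbar_b) @ beta_b    # endowments (performance)
    unexplained = xbar_a @ (beta_a - beta_b)  # coefficients (discrimination)
    return explained, unexplained

# Illustrative synthetic salary data: group A has better stats but the
# same underlying wage structure, so the gap should be "explained".
rng = np.random.default_rng(1)
X_a = np.column_stack([np.ones(100), rng.normal(1.0, 1.0, 100)])
X_b = np.column_stack([np.ones(100), rng.normal(0.0, 1.0, 100)])
y_a = X_a @ np.array([1.0, 2.0]) + rng.normal(0, 0.1, 100)
y_b = X_b @ np.array([1.0, 2.0]) + rng.normal(0, 0.1, 100)
explained, unexplained = oaxaca(X_a, y_a, X_b, y_b)
```

With an intercept in both regressions the two parts sum exactly to the raw mean gap, which is what makes the decomposition useful: a salary premium with a near-zero unexplained component, as the thesis finds, points to performance rather than nationality.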
  • Suihkonen, Sini (2023)
    The importance of protecting sensitive data from information breaches has increased in recent years as companies and other institutions gather massive datasets about their customers, including personally identifiable information. Differential privacy is one of the state-of-the-art methods for providing provable privacy for such datasets, protecting them from adversarial attacks. This thesis focuses on studying existing differentially private random forest (DPRF) algorithms, comparing them, and constructing a version of the DPRF algorithm based on them. Twelve articles from the late 2000s to 2022, each implementing a version of the DPRF algorithm, are included in the review of previous work. The created algorithm, called DPRF_thesis, uses a privatized median as the method for splitting internal nodes of the decision trees. The class labels of the leaf nodes are chosen with the exponential mechanism. Tests on the DPRF_thesis algorithm were run on three binary classification UCI datasets, and the accuracy results were mostly comparable with those of the two existing DPRF algorithms that DPRF_thesis was compared to. ACM Computing Classification System (CCS): Computing methodologies → Machine learning → Machine learning approaches → Classification and regression trees; Security and privacy → Database and storage security → Data anonymization and sanitization
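The exponential-mechanism step for leaf labels can be sketched as sampling a class with probability proportional to exp(ε·u / (2Δu)), using the class count as the utility score u. A minimal illustration (the epsilon value and counts below are illustrative assumptions, not parameters from the thesis):

```python
import math
import random

def exponential_mechanism(counts, epsilon, sensitivity=1.0, rng=random):
    """Differentially private label choice for a leaf node: sample a class
    with probability proportional to exp(eps * count / (2 * sensitivity)),
    with the class count as the utility score. Adding or removing one
    record changes any count by at most 1, so sensitivity = 1."""
    labels = list(counts)
    weights = [math.exp(epsilon * counts[c] / (2.0 * sensitivity))
               for c in labels]
    return rng.choices(labels, weights=weights, k=1)[0]

# A leaf with 30 positives and 5 negatives almost always reports "1",
# but the occasional flip is what provides the privacy guarantee.
label = exponential_mechanism({"1": 30, "0": 5}, epsilon=1.0)
```

Lower epsilon flattens the weights, making the reported label noisier: the usual privacy-utility trade-off the thesis's accuracy experiments measure.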