
Browsing by study line "General track"


  • Saarinen, Sofia (2022)
    The first wave of basic income experiments took place in North America between the 1960s and 1980s, and the second wave started in the early 2000s and is still ongoing globally. Experiments are used to gain knowledge about the effects of basic income on labour supply, poverty, and welfare. This thesis is a literature review that uses labour supply theory to examine the labour supply effects observed when basic income and negative income tax policies are tested with randomised controlled trials. The purpose is to find possible causes of the differing results with the help of labour supply theory and by carefully studying the characteristics of the experiments. The topic is relevant because experimental results can be used when making decisions about future welfare reforms. Previous research on basic income has been done by Banerjee, Niehaus, and Suri (2019), Ghatak and Maniquet (2019), and Hoynes and Rothstein (2019), who find that the implementation of a basic income reduces labour supply more in advanced countries than in developing countries. Robins (1985) and Hum and Simpson (1993) find that negative income tax experiments resulted in labour supply reductions. Other authors, in contrast, find that a basic income decreases labour supply only for those who are generally expected to work less: the elderly, those with disabilities or illnesses, mothers of young children, and children (De Paz-Báñez, Asensio-Coto, Sánchez-López, & Aceytuno, 2020). I conclude that a universal basic income redirects labour supply in developing countries from wage labour to self-employment, although the exact increase or decrease in total labour supply is unknown. Negative income taxes decrease labour supply by a few weeks of full-time employment per year. Although there does not seem to be a clear causal link between variation in experimental design and the results reported by the 21st-century experiments, factors such as the structure of the research sample, the geographical area, the type of poverty trap, and the size of the increase in living standards created by basic income may affect the beneficiaries' labour supply.
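    The labour supply effects discussed above are usually organised around the static labour–leisure model. As a reminder of that framework (generic textbook notation, not the thesis's own formulation; the phase-out rate t and guarantee G below are illustrative):

```latex
% Static labour-leisure choice with an unconditional basic income B
\begin{aligned}
\max_{c,\;l}\;\; & U(c,\,l) \\
\text{s.t.}\;\; & c = w\,(T - l) + B, \qquad 0 \le l \le T .
\end{aligned}
% A lump-sum B shifts the budget line out without changing the wage w, so hours
% h = T - l weakly fall through a pure income effect. A negative income tax with
% phase-out rate t instead gives c = (1 - t) w (T - l) + G, adding a substitution
% effect that reinforces the reduction in hours.
```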
  • Lindfors, Teppo (2020)
    Since the 1980s, global poverty has declined substantially. On numerous occasions, the reduction in poverty has been connected to an agrarian reform. A land reform is a type of agrarian reform which involves redistribution of land or changes in the legal framework for land administration. A large body of empirical studies finds that land reforms are a prominent tool for alleviating poverty. In this thesis, I examine the economic outcomes of the Finnish land reform of 1918. The reform enabled tenant farmers, who made up around half of the rural population, to buy their farms at a fraction of the market price. As my identification strategy, I use instrumental variables analysis, exploiting arguably exogenous variation in the regional distribution of tenants. I employ municipal-level data from the decennial agricultural censuses from 1910 to 1941. I find that the land reform increased capital intensity by around 23% in the two subsequent decades, which corresponds to over a third of the overall increase. Using a simple stochastic output model, I estimate that this would imply a 14% increase in output at the farm level. Furthermore, I compute that the reform accelerated the structural transformation of agriculture toward dairy farming by 10 years. These effects are robust to controlling for various municipal characteristics, such as natural conditions, population density and wealth. To confirm that the analysis does not simply capture dissimilarities in pre-reform development, I report baseline differences in municipal characteristics by regressing outcomes on the proportion of tenants in a cross-section for 1910. These findings challenge the traditional view that the Finnish land reform set back progress in agriculture, and they are in line with the evidence on the economic benefits of land reforms. As a novel contribution, this thesis is able to show that the effects are persistent. The exact mechanism driving the results could not be distinguished; I suspect that the causal channel operated either through farmers' improved incentives or through access to collateralizable assets, both of which depend on property rights.
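    The step from a roughly 23% rise in capital intensity to a roughly 14% rise in output is consistent with a simple per-worker production function; the Cobb–Douglas form and the elasticity of about 0.6 below are illustrative assumptions, not the thesis's stated model:

```latex
y = A\,k^{\alpha}
\quad\Longrightarrow\quad
\frac{\Delta y}{y} \;\approx\; \alpha\,\frac{\Delta k}{k}
\;\approx\; 0.6 \times 23\% \;\approx\; 14\% ,
% where y is output per worker, k is capital intensity (capital per worker) and
% alpha is the assumed output elasticity of capital.
```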
  • Lahdenkauppi, Lina-Lotta (2021)
    This thesis studies lifetime earnings inequality in Finland using a unique dataset based on administrative data from the Finnish Centre for Pensions. I analyse intragenerational lifetime earnings, their distribution and the mobility of individuals in the earnings distribution over the life cycle to determine whether Finnish cohorts are becoming more or less equal in terms of their lifetime earnings. In addition, I examine the association between current and lifetime earnings over the life cycle. The analysis includes nine cohorts born every five years between 1940 and 1980. Altogether 4 140 individuals are included in the analysis. However, since the cohort-specific sample sizes are very small compared with sample sizes used in present-day research, the results have to be interpreted with caution. Results concerning the evolution of intragenerational lifetime earnings inequality differ between men and women. For men, the results indicate no clear increasing or decreasing trend in intragenerational lifetime earnings inequality. However, the Gini coefficients of lifetime earnings defined up to age 39 suggest that the 1970 and 1975 cohorts experience higher levels of inequality than the older cohorts born in the 1950s. Findings for women, in contrast, imply a decreasing trend in intragenerational lifetime earnings inequality between successive cohorts: younger female cohorts experience less intragenerational lifetime earnings inequality than older cohorts. Findings concerning the association between current and lifetime earnings show that annual and lifetime earnings are highly correlated between ages 40 and 50 for men, and between ages 45 and 55 for women. Annual earnings can therefore be considered a good proxy for lifetime earnings within these age brackets for men and women, respectively.
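    For reference, the Gini coefficient used in the cohort comparisons can be computed from a vector of lifetime earnings as below; this is a generic sketch with made-up numbers, not the thesis's code or data.

```python
import numpy as np

def gini(earnings: np.ndarray) -> float:
    """Gini coefficient of a non-negative earnings vector (generic sketch)."""
    x = np.sort(np.asarray(earnings, dtype=float))
    n = x.size
    # Gini = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n, with 1-based ranks i
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * x) / (n * x.sum()) - (n + 1) / n

# Hypothetical example: lifetime earnings (in euros) for a small cohort sample
cohort = np.array([600_000, 850_000, 1_200_000, 1_500_000, 2_400_000])
print(f"Gini: {gini(cohort):.3f}")
```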
  • Hollming, Patrik (2022)
    Early literature focused solely on risk's role in asset pricing. Incorporating liquidity helps explain otherwise unexplained market observations, such as the cross-section of assets with different liquidity, and relaxes some strong assumptions of standard general equilibrium asset pricing models, such as frictionless markets, no arbitrage, agent optimality and equilibrium. These assumptions would imply that securities with identical cash flows are priced equally, contrary to empirical evidence. The simplest definition of liquidity is the ease of trading a security, whereas in the LAPM and the Kiyotaki–Moore model liquidity is defined as the aggregate value of the ability to transfer wealth across time. Illiquidity stems from exogenous transaction costs, demand pressure, inventory risk, private information and search frictions. Illiquid assets are harder to sell and require a discount, which translates into a higher interest rate compared to a more liquid asset, or equivalently a higher price commanded by the more liquid asset, ceteris paribus; this is the liquidity premium. The intertemporal CAPM predicts that an asset's price is the expectation of the product of the asset's payoff and the consumer's intertemporal marginal rate of substitution. The LAPM presents an alternative approach based on corporations' desire to store liquidity in order to meet their future payments. Unlike most real business cycle models, which feature a borrowing constraint, the Kiyotaki–Moore model features a resalability constraint that allows the model to price the level of liquidity and liquidity risk. The main difference between the models is the LAPM's full state contingency, which removes the need for money to circulate.
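    The intertemporal pricing statement in the entry above is the standard stochastic discount factor relation; written out in generic notation:

```latex
p_t = \mathbb{E}_t\!\left[\, m_{t+1}\, x_{t+1} \,\right],
\qquad
m_{t+1} = \beta\,\frac{u'(c_{t+1})}{u'(c_t)} ,
% where p_t is the asset's price, x_{t+1} its payoff and m_{t+1} the consumer's
% intertemporal marginal rate of substitution. A liquidity premium appears as an
% extra required return on the less liquid of two assets with identical payoffs.
```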
  • Linnupöld, Karl (2024)
    In recent years, research on Central Bank Digital Currencies (CBDCs) has increased. With the rise of private digital currencies, both as a challenger to the status quo and as an example of the future possibilities of money, many new studies have been released. Practical interest has focused on retail CBDC as a replacement for the current monetary system. The majority of the literature comes from central banks themselves, in addition to academic research close to the topic. This thesis provides an overview of the economics of CBDC and the motives behind its implementation. A CBDC provides new tools for the central bank and enables it to conduct new kinds of policy. The literature points to risks to financial stability and, in particular, to a rethinking of banking as we know it. The design of a CBDC must be right for it to have an impact and to ensure wide adoption. A CBDC would serve as a defence of monetary sovereignty if a private digital currency threatened to take over. A CBDC would give the central bank considerable power, making it a prime target for political meddling and calling its independence into question. In banking, a CBDC would increase the likelihood of bank runs but would force commercial banks to compete for deposits. It could boost overall welfare by lowering transaction costs. A CBDC would increase financial inclusion and enable a completely domestic payment system. In the end, many of the results are theoretical and lack empirical evidence. A CBDC does not directly fix any large problem in the monetary system; implementing one could be a costly flop, and many things could go wrong at launch. The results show that more research and debate are needed on the topic.
  • Knuutila, Tatu (2022)
    This thesis focuses on rational bubbles. Such bubbles belong to the indeterminacy school of economics, as they move economic models away from equilibria determined by real economic factors. The topic is studied with an extensive literature review of theoretical and empirical research, and independent contributions provide new evidence and elaborate on the topic further. The theoretical literature review begins from the classic Samuelson–Diamond–Tirole model. This classical model, however, is deterministic, dynamically inefficient and contractionary, and the review therefore focuses on models that overcome these three weaknesses; models with financial frictions in particular are given much emphasis. Rational bubbles are explored in more depth in the framework of the Martin–Ventura model, which is similar to the classic model but overcomes the classical weaknesses with stochasticity and financial frictions. This model is examined in close detail and simulated to show how rational bubbles can have both expansionary and contractionary effects. Bubble detection is an inseparable part of bubble theory, and the empirical literature review introduces different testing strategies: for example, variance bound, unit root, cointegration, instrumental variable and regime-switching tests are discussed. However, many tests require data on variables that are hard to observe, and even then the results are often ambiguous. The independent empirical work uses aggregate data on GDP, capital stock, consumption and wealth in the USA and Japan. Aggregate wealth is tested univariately for bubbles with the PSY test and autoregressive change-in-persistence tests, and the multivariate real effects of prospective wealth bubbles are studied with vector autoregression. Similar univariate and multivariate approaches have been applied in other empirical studies, but in more specific setups. The literature review shows that rational bubbles are a diverse concept that can be extended to many models; furthermore, such indeterminacies often improve the performance and plausibility of conventional (deterministic) models. The theoretical and empirical contributions are aligned and provide tentative bubble-supporting evidence for both countries: in light of the results, both the USA and Japan have accommodated economic bubbles during the last five decades.
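    For orientation, the defining property of a rational bubble is that the bubble component must earn the required return in expectation; in a simple deterministic overlapping-generations setting (generic notation, not the thesis's simulation):

```latex
b_{t+1} = (1 + r_t)\, b_t ,
% so in per-capita terms a bubble can survive asymptotically only if the interest
% rate does not exceed the economy's growth rate, i.e. when the bubbleless
% equilibrium is dynamically inefficient.
```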
  • Räsänen, Tatu Tapio (2021)
    Urbanization is one of the megatrends of our time: cities have become larger and more important economic centers. The city populations of both Stockholm and Helsinki grew over the years 1990–2019, and migration has been one of the factors driving this urbanization process. The increased attractiveness of urban housing raises the demand for it, which affects urban housing market dynamics. In Stockholm and Helsinki, the real prices of old apartments trended upward for most of the years 2005–2019. The primary purpose of the study is to test whether excess migration explains real house prices. In addition, the roles of real income per person, the real interest rate, new apartment construction, and the unemployment rate are examined. House prices are linked to household wealth and private consumption, and they affect people's ability to move to new areas to take up a job. In the empirical part, the house price determinants of old apartments are examined by applying a two-stage least squares model to panel data from Stockholm and Helsinki for the years 2005–2019. The observation period covers almost the first two decades of the millennium, starting from the aftermath of the tech bubble and including the global financial crisis of 2008 and the European debt crisis that began in 2010. The data on house prices, excess migration, real income per person, newly completed apartments, and the unemployment rate are at the municipality level, while the real interest rates are computed from national-level data for Sweden and Finland, except for the 6-month EURIBOR, which is euro-area data. The data are provided by Valueguard, Statistics Sweden, Statistics Finland, the OECD, and the City of Helsinki. The empirical results strongly indicate that the real interest rate and real income per person affected the prices of old apartments in Stockholm and Helsinki in 2005–2019. However, the results do not give a statistically significant estimate of the role of excess migration in explaining house prices. Furthermore, the estimates for newly completed apartments and the unemployment rate are statistically insignificant, which hampers the analysis of these determinants as explanatory variables for house prices. The finding that real income affects house prices is in line with previous findings from the Swedish and Finnish housing markets, and previous Finnish evidence also supports the finding that the real interest rate affects house prices. The empirical findings underline the importance of macroprudential tools for preventing possible overheating of housing markets in a low interest rate environment, and they highlight the need to closely monitor household indebtedness and the share of household income used for loan instalments. The results also raise the question of whether the housing markets are capable of supporting migration into these cities from areas where the real income level is lower than in Stockholm and Helsinki.
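    The two-stage least squares setup can be sketched generically as follows; the variable names are illustrative and the choice of excluded instruments is not spelled out here:

```latex
% First stage: project the potentially endogenous regressor on instruments and controls
x_{it} = \pi' z_{it} + \gamma' w_{it} + u_{it} ,
% Second stage: regress real prices of old apartments on the fitted values and controls
p_{it} = \beta\,\hat{x}_{it} + \delta' w_{it} + \varepsilon_{it} ,
% where i indexes the city (Stockholm, Helsinki), t the year, w_{it} collects the
% controls (real income per person, real interest rate, completions, unemployment)
% and z_{it} the excluded instruments.
```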
  • Markkanen, Ville (2019)
    Statistical offices follow the prices of different products in order to produce price indices. The quality of products is constantly changing due to creative destruction. When a product leaves the market, its price is computed with a method called imputation. Recent studies in the United States and France have found that the use of imputation may lead to an upward bias in measured inflation. Since price indices are used as deflators when calculating economic growth, such a bias would mean that some of the growth is missed. The aim of this thesis is to study whether such a bias exists in Finland and how large it is. In addition, the channels of innovation-induced growth are studied in order to determine where the potentially missed growth originates. Creative destruction was incorporated into economic growth models in the early 1990s. At its centre are firms at the micro level that innovate, create new products and improve existing ones, and it has been shown to be a key element of economic growth. New products and the improving quality of old varieties are, however, a widely recognised problem for price indices. Sources of bias in price statistics have been studied extensively, and the changing quality of products is one of the largest. This thesis contributes to this field by recognising a new possible source of bias and its magnitude in the Finnish economy. The model used in this thesis is from a 2017 paper by Aghion, Bergeaud, Boppart, Klenow and Li. The model is a New Keynesian DSGE model with exogenous innovation, and it provides an accounting framework which enables the quantification of missing growth. The missing growth is estimated using a so-called market share approach, where the market shares of incumbent and entrant producers are exploited to quantify the share of growth that is missed yearly. Another method, indirect inference, relies on simulation of the economic growth model. It infers the arrival rates and step sizes of different types of innovation: incumbent own innovation, creative destruction and new product varieties. The simulation also makes it possible to find the contributions of these innovation types to economic growth, which shows from which type of innovation the majority of growth comes. Both methods use data provided by Statistics Finland: micro-level data on private enterprises in Finland during the years 1989–2016. The market share approach requires establishment-level data and information on revenue and employment, while the indirect inference method uses the same data aggregated at the firm level for the years 1993–2013. In addition, the simulation requires the total factor productivity growth rate for the given years. The results suggest that 0.489 percentage points of growth have been missed yearly in Finland during the years 1989–2016 when calculated with revenue data; with employment data, the missed growth is estimated at 0.532 percentage points per year. The results are comparable in magnitude with the results from the United States and France, and the magnitude has remained stable over the years. The indirect inference method suggests that most of the growth comes from incumbent own innovation: 59.3% in 1993–2003 and 57.8% in 2003–2013. The rest is due to creative destruction and new product varieties, either by incumbents or by entrants. If 0.5 percentage points of growth are missed every year, the cumulative effects on the economy are significant. For example, many social benefits are tied to price indices, and overestimation of the indices would mean that the benefits have not risen as much as they should have. Given the systematic nature of the bias, the central bank should consider increasing its inflation target. The statistical offices that produce the price statistics may be able to lower the bias if they manage to keep up to date with incumbent own innovation, since the majority of growth originates from it. A chain-linked index also helps lower the bias by updating the sample and weights on a yearly basis. Additional research is needed in order to find solutions to overcome the bias caused by creative destruction and the imputation of missing prices.
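    To illustrate the magnitude cited above: if roughly 0.5 percentage points of growth are missed each year over the 28 years from 1989 to 2016, the cumulative understatement of the output level is approximately (illustrative arithmetic):

```latex
(1 + 0.005)^{28} - 1 \;\approx\; 0.15 ,
% i.e. measured output in 2016 would understate the true level by roughly 15 per cent.
```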
  • Pyykkö, Alli (2023)
    Gender wage gaps have narrowed considerably over the past century, but they remain significant. Traditional economics explains wage gaps by gender differences in human capital and preferences, and by discrimination. The motherhood penalty explains a large share of the wage gap, yet part of it remains unexplained. Recent economic research has sought new explanations for wage gaps with the help of experimental economics. It has been found that differences in competitiveness affect gendered educational and career choices as well as labour market behaviour, and through these channels the wage gap. This thesis examines the effect of competitive preferences on wage gaps from the perspective of experimental economics, in which the relevant factors can be controlled. The thesis is a literature review. The material consists of recent research articles on the topic published in respected scientific journals. The thesis compares the experimental methods of different studies and the results obtained from them, and offers the reader a broad picture of the factors explaining wage gaps and of the extent of the effect of competitiveness. Men are more competitive and more confident than women. Competitive preferences are strongly influenced by beliefs about one's own and one's opponent's skills. Competitiveness correlates positively with choices of mathematical and technical fields of study, which are associated with high pay. In addition to educational and career choices, competitiveness affects job search and willingness to negotiate.
  • Ivaska, Juho (2021)
    Mitigating the Covid-19 shock – A simulation study on the cost compensation schemes of Finland, Norway and the United States (Master's thesis, 11/2021). During the Covid-19 pandemic, many countries implemented sizeable support programs for companies suffering from the pandemic. This thesis compares the effectiveness of the Business Cost Support of Finland, the Norwegian Business Compensation Scheme and the Paycheck Protection Program (PPP) of the USA in mitigating the pandemic's effects on firm profitability, liquidity and solvency. All three programs are cost support schemes, but they differ in which costs are covered and in their eligibility criteria. The comparison is carried out by simulating the pandemic-induced turnover shock on Finnish enterprises under each support scheme. Statistics Finland's detailed financial statement data from 2019 provide the starting position for the simulation. The turnover shock is one year in length and is assigned to firms based on their industry code. The effectiveness of the support schemes is measured by a mitigation rate, which describes the share of the effects of the pandemic that the scheme can mitigate; the costs of the schemes are also considered. This thesis finds that the Norwegian scheme was the most effective in decreasing the number of unprofitable firms as well as the number of firms with liquidity troubles. It ranks highest in all but one measure even when adjusted for its second-highest cost. The Finnish scheme yielded the highest price-adjusted mitigation rate in average quick ratio but trailed the Norwegian scheme slightly in all other categories. The PPP was the most expensive of the support schemes and thus the least effective in all the profit- and liquidity-related measures. This thesis concludes that compensating fixed costs and targeting the support carefully were crucial in supporting the worst-hit businesses at a reasonable price. The Finnish Business Cost Support fared well compared to its counterparts, but allowing higher and lower single support payments would most likely have increased its effectiveness. If the target of the scheme is keeping employees on firm payrolls, a pure wage compensation scheme such as the PPP yields better results.
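    A minimal sketch of the kind of firm-level simulation described above, with a hypothetical cost-support rule; the columns, parameter values, eligibility threshold and compensation formula are illustrative assumptions, not the actual rules of the Finnish, Norwegian or US schemes.

```python
import pandas as pd

# Hypothetical firm-level 2019 financials (columns are illustrative)
firms = pd.DataFrame({
    "turnover": [1_000_000, 400_000, 250_000],
    "fixed_costs": [300_000, 120_000, 90_000],
    "variable_cost_share": [0.55, 0.60, 0.50],   # share of turnover
    "shock": [0.40, 0.25, 0.10],                 # pandemic turnover drop by industry
})

def profit(df, support=0.0):
    turnover = df["turnover"] * (1 - df["shock"])
    variable_costs = df["variable_cost_share"] * turnover
    return turnover - variable_costs - df["fixed_costs"] + support

# Illustrative support rule: compensate 70% of fixed costs, scaled by the turnover
# loss, for firms whose turnover fell at least 30% (threshold is an assumption)
eligible = firms["shock"] >= 0.30
support = 0.70 * firms["fixed_costs"] * firms["shock"] * eligible

baseline = profit(firms.assign(shock=0.0))        # no pandemic
no_support = profit(firms)                        # pandemic, no scheme
with_support = profit(firms, support=support)     # pandemic + scheme

# Mitigation rate: share of the pandemic-induced profit loss offset by the scheme
mitigation = (with_support - no_support).sum() / (baseline - no_support).sum()
print(f"Aggregate mitigation rate: {mitigation:.2f}")
```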
  • Lampinen, Sebastian (2022)
    Modeling customer engagement helps a business identify high-risk and high-potential customers. One way to define high-risk and high-potential customers in a Software-as-a-Service (SaaS) business is as customers with a high potential to churn or to upgrade. Identifying these customers in time can help the business retain and grow revenue. This thesis uses churn and upgrade prediction classifiers to define a customer engagement score for a SaaS business. The classifiers used and compared in the research were logistic regression, random forest and XGBoost. The classifiers were trained on data from the case company containing customer information such as user counts and feature usage. To tackle class imbalance, the models were also trained with oversampled training data. The hyperparameters of each classifier were optimised using grid search. After training the models, the performance of the classifiers was evaluated on test data. In the end, the XGBoost classifiers outperformed the other classifiers in churn prediction, while in predicting customer upgrades the results were more mixed. Feature importances were also calculated, and the results showed that the important features differ between churn and upgrade prediction.
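    A minimal sketch of the modelling pipeline described above (logistic regression and random forest with grid search on oversampled training data); the features are synthetic placeholders for the case company's data, and XGBoost would slot in analogously via xgboost.XGBClassifier.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.utils import resample

# Hypothetical feature matrix (e.g. user count, feature usage) and churn labels
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = (rng.random(2000) < 0.1).astype(int)          # roughly 10% churners: imbalanced

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Naive random oversampling of the minority class, applied to training data only
minority = np.flatnonzero(y_tr == 1)
extra = resample(minority, n_samples=int((y_tr == 0).sum()) - minority.size, random_state=0)
idx = np.concatenate([np.arange(y_tr.size), extra])
X_bal, y_bal = X_tr[idx], y_tr[idx]

models = {
    "logit": GridSearchCV(LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    "rf": GridSearchCV(RandomForestClassifier(random_state=0),
                       {"n_estimators": [200], "max_depth": [3, 6, None]}),
}
for name, gs in models.items():
    gs.fit(X_bal, y_bal)
    auc = roc_auc_score(y_te, gs.predict_proba(X_te)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")
```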
  • Myllylahti, Iiro (2023)
    The formation and determining factors of commute distance are of prime interest in urban and labour economics, as the length of the commute is the result of many different personal and economic factors. Knowledge of which factors affect the length of the commute could be of use in public policy as well as in future research. Empirical studies on the effect of income specifically have historically shown mixed results due to bias resulting from reverse causality and inadequate methods of correcting for it. To date, there has been little research on commuting in Finland, and no studies focused specifically on the effect of income. This study applies a cross-sectional linear regression model, as well as a fixed effects model utilizing panel data, both to survey the overall effects of a large catalogue of determinants of commute distance and to correct for the reverse causality issue in the income-commute relationship. The study focuses on the year 2020 but utilizes data from 2015 to construct the fixed effects model. The fixed effects model is used to determine the relationship between income and commute distance, as it by construction corrects for time-invariant omitted variable bias. In addition, by restricting the data to workers who remain with their employer during the observation period, the reverse causality issue is eliminated. The results indicate that the relationship between income and commute distance in Finland is negative in the fixed effects model when reverse causality is eliminated, which contrasts with basic theory and the large majority of previous studies. Sensitivity tests suggest that this result is not merely a spurious outcome of the data selection process but a genuine finding. Among the other determinants, educational effects in Finland, especially for men, differ from what was expected, as higher education was associated with a decrease in commute distance. Being married, being male and having a car were found to positively affect commute distances, while the number of children in the household was a negative factor, especially for women. The results of this study reinforce the notion that studying the relationship between income and commute distance without correcting for reverse causality leads to biased estimates. Additionally, compared to the few previous studies that have sufficiently done so, this study suggests a pattern: as a negative relationship between income and commute distance has so far been established only in Denmark and Finland, in contrast to positive results in larger European countries, future studies could examine whether this effect is characteristic of the Nordic countries or of smaller countries in general.
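    The fixed effects specification used to isolate the income effect can be written generically as (notation illustrative):

```latex
d_{it} = \beta\,\ln(\mathit{income}_{it}) + \gamma' x_{it} + \alpha_i + \lambda_t + \varepsilon_{it} ,
% where d_{it} is worker i's commute distance in year t, alpha_i a worker fixed
% effect and lambda_t a year effect. The within (demeaning) transformation removes
% alpha_i, so beta is identified from within-worker changes in income; restricting
% the sample to workers who stay with the same employer additionally shuts down the
% reverse-causality channel running through job changes.
```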
  • Purkunen, Aleksi (2022)
    In the Finnish private sector pension system, a part of the pension benefits is financed from funds into which a part of the pension contributions has been deposited earlier. The funds collected for future pensions and various buffer funds constitute the pension providers' pension liabilities, i.e. the technical provisions. The investment assets acquired with the accrued funds and the returns on them constitute the pension providers' pension assets. The private sector pension providers must supplement (i.e. increase) the funds reserved for financing future pensions according to their common rules. These rules, which affect the solvency risk management of the pension providers, constitute the fund transfer obligation. One component of the fund transfer obligation is the supplementary factor, which adjusts the rate at which old-age pensions are funded, and hence the growth rate of the technical provisions, to the average solvency of the pension providers. In 2021, the Ministry of Social Affairs and Health drafted a proposal with significant modifications to the supplementary factor. The proposal includes guidelines for a new formula for the supplementary factor and an option to recalculate its value more frequently than before. However, the proposal also acknowledged that there is no quantitative backing for the benefits of more frequent recalculation of the supplementary factor; as part of the results, we provide the first quantitative estimates of these benefits. In this thesis, we analyze how the recalculation frequency of the supplementary factor affects the solvency risk of the private sector pension providers in aggregate. We find that more frequent recalculation of the supplementary factor in its current form slightly reduces the solvency risk of the aggregate pension provider without significant drawbacks. This reduction in solvency risk is magnified when an alternative formula for the supplementary factor, aligned with the proposal by the Ministry of Social Affairs and Health, is introduced. At the same time, however, the formula aligned with the proposal increases the risk in the rate at which old-age pensions are funded. In conclusion, we find it recommendable to recalculate the supplementary factor more frequently than at present under both formulas considered.
  • Leirimaa, Jani (2020)
    The aim of the research is to understand and quantify energy poverty in Finland within the framework of political science and welfare policy. Energy poverty is a phenomenon with several potential causes, and a broad range of policies and political measures is therefore needed to alleviate it. Energy poverty is a phenomenon that has recently received more attention in academic research in a European context. Two methods were used to study energy poverty in Finland. First, a qualitative secondary analysis was carried out to identify and categorise findings in previous research on the causes of and correlates with energy poverty. Based on the previously found correlations, assumptions were made about what causes, or risks causing, energy poverty. This was necessary because energy poverty is not measured in Finland. After the qualitative secondary analysis, a quantitative analysis was carried out to examine the regional significance of each factor associated with energy poverty. The qualitative secondary analysis of previous research was successful and yielded correlates and causes of energy poverty. The regional significance of each factor was quantified successfully, and assumptions could be made about the phenomenon in Finland. Finally, the political and policy efforts that have already been taken in Finland, directly or indirectly, to alleviate energy poverty are discussed.
  • Isola, Josefina (2021)
    In recent years, international tax issues have attracted wide global attention. International tax rules that were designed more than a century ago have weaknesses that create opportunities for base erosion and profit shifting. Multinational corporations use various profit shifting channels, such as transfer pricing and debt shifting, to minimize their corporate taxes, and the increasing digitalisation of the economy poses further challenges for the current rules. The primary objectives of the thesis are to analyse how countries choose their corporate income tax rates when tax bases are mobile, what the extent of tax avoidance is, and what policy proposals have been suggested to fight tax avoidance. The thesis is an overview of the profit shifting and tax competition literature. The theoretical framework consists of two workhorse models of tax competition, the Zodrow–Mieszkowski–Wilson model and the Kanbur–Keen model. The theory part is accompanied by two studies: one providing an overview of the empirical literature on profit shifting, and another exploiting new macroeconomic data to analyse how tax differentials affect profit shifting between countries. Regarding policy proposals, the thesis focuses on the Base Erosion and Profit Shifting project, but four alternative schemes are also discussed. The current tax rules need fundamental reform. A functioning international tax system requires even more coordination between countries than has been achieved so far, as uncoordinated and unilateral approaches cause adverse spillovers and distortions. The interests and capacities of developing and low-income countries need to be considered in decision-making, as they rely more on corporate income tax revenues than advanced countries do.
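    A stylised statement of the Zodrow–Mieszkowski–Wilson mechanism referred to above (a simplified sketch, not the thesis's exact exposition):

```latex
f'(k_i) - t_i = \rho \quad \text{for every jurisdiction } i,
\qquad
g_i = t_i\, k_i ,
% Mobile capital equalises net returns across jurisdictions, so a unilateral rise in
% the source-based tax t_i drives capital out until f'(k_i) rises to restore the
% condition; anticipating this, jurisdictions set taxes and public good provision
% g_i inefficiently low relative to the coordinated optimum.
```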
  • Reiterä, Tuomas (2021)
    The aim of the vocational rehabilitation organised by the Social Insurance Institution of Finland (Kela) is to help rehabilitation clients choose a suitable occupation, find employment, and stay in or return to working life. This thesis examines the effect of vocational rehabilitation on the later life situation of rehabilitation clients aged 16–29 by looking at their receipt of social security benefits related to incapacity for work or unemployment three years after applying for rehabilitation. Entry into vocational rehabilitation cannot be regarded as random: rehabilitation services are typically directed to individuals who need rehabilitation for health reasons but are able to benefit from the service offered. Confounding factors affect both access to rehabilitation and later receipt of social security, which makes a positive rehabilitation decision a so-called endogenous explanatory variable for benefit receipt. The data consist of information drawn from Kela's benefit registers on vocational rehabilitation applications submitted in 2016 by persons aged 16–29, linked to information on the Kela social security benefits paid to these persons in 2019. To address the estimation bias created by the endogenous explanatory variable, the thesis uses instrumental variable estimation. The instrument for receiving a positive rehabilitation decision is the share of positive decisions that the case worker who handled the person's application made on the other applications they processed, i.e. the so-called "strictness" of the decision-maker. The causal effect of a positive rehabilitation decision on the social security benefits paid is estimated with two-stage least squares (2SLS), alongside conventional ordinary least squares estimation. The results concern the validity of decision-maker strictness as an instrument for receiving a positive rehabilitation decision, and the effect of a positive decision on the social security the person received in 2019. The results indicate that decision-maker strictness satisfies the assumptions of instrument exogeneity, relevance and monotonicity. The 2SLS results show that a positive rehabilitation decision increases the receipt of income-security benefits. The 2SLS estimation, which uses decision-maker strictness as a valid instrument, captures the causal effect of a positive rehabilitation decision on the receipt of different social security benefits. When interpreting the results, however, the imprecision of the estimates and their local nature must be taken into account: the 2SLS estimation captures the causal effect for marginal cases whose access to rehabilitation depended on the strictness of the decision-maker.
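    A minimal sketch of the examiner-leniency instrument and the two-stage estimation described above; the data are randomly generated placeholders, the column names are hypothetical, and the manual second-stage standard errors are not corrected for the generated regressor, so this is illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical application-level data: case worker id, decision, 2019 benefits (euros)
df = pd.DataFrame({
    "caseworker": np.repeat(np.arange(50), 40),
    "approved": np.random.default_rng(1).integers(0, 2, 2000),
    "benefits_2019": np.random.default_rng(2).normal(3000, 1500, 2000),
})

# Leave-one-out approval rate of the case worker handling each application
grp = df.groupby("caseworker")["approved"]
df["leniency"] = (grp.transform("sum") - df["approved"]) / (grp.transform("count") - 1)

# First stage: regress the approval decision on the leniency instrument
first = sm.OLS(df["approved"], sm.add_constant(df["leniency"])).fit()
df["approved_hat"] = first.fittedvalues

# Second stage: regress benefits on fitted approval (manual 2SLS; in practice the
# standard errors must be corrected, e.g. with a dedicated IV estimator)
second = sm.OLS(df["benefits_2019"], sm.add_constant(df["approved_hat"])).fit()
print(first.params, second.params, sep="\n")
```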
  • Saada, Adam (2018)
    Logistic regression has been the most common credit scoring model for several decades. The purpose of a credit scoring model is to distinguish good applicants from bad applicants so that consumer credit is lent to persons who are likely to repay it. In Finland, households' indebtedness has increased while wage development has stagnated. In addition to mortgages, indebtedness has increased because of the rising number of consumer credit loans. Consumer credit usually consists of unsecured loans, which several financial institutions provide quickly and flexibly, and it is considered one of the major causes of default. Systemic risks are still being avoided for now, but the increased number of customers and the fierce competition in the sector can bring new risks that should be anticipated, as insolvent customers cause losses to financial institutions. Developing and deploying new credit scoring models is one of the best ways to hedge against default risk. The prediction accuracy and performance of tree-based credit scoring models have been studied, and in many cases tree-based algorithms have performed better than traditional statistical models such as logistic regression. In this master's thesis, classical logistic regression is compared to these tree-based algorithms. The most well-known tree-based algorithms have been chosen: random forest, discrete AdaBoost, real AdaBoost, LogitBoost, Gentle AdaBoost and gradient boosting. These methods use the tree algorithm as the base learner but differ in their iterative processes. The data, gathered from a medium-sized Finnish financial company, consist of customers' personal information and their payment behavior in sales finance. It is important to compare how the different models predict insolvency in light of different test statistics. In this thesis, the best-performing models are logistic regression and the gradient boosting algorithm. Based on the research, it is recommended to develop a credit scoring model based on the gradient boosting algorithm. This algorithm highlights different explanatory variables than logistic regression, and these variables can better explain the causes of insolvency. The results are robust and plausible, as the different tests lead to similar conclusions.
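    A minimal sketch of the kind of comparison reported above, scoring logistic regression against scikit-learn's gradient boosting on synthetic data; the generated features stand in for the proprietary sales-finance data, and the Gini coefficient shown is the usual scorecard summary 2·AUC − 1.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Placeholder for the proprietary sales-finance data: features + default indicator
X, y = make_classification(n_samples=5000, n_features=10, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("gradient boosting", GradientBoostingClassifier(random_state=0))]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    # Gini of the scorecard, a common credit-scoring summary derived from the AUC
    print(f"{name}: AUC = {auc:.3f}, Gini = {2 * auc - 1:.3f}")
```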
  • Pyykölä, Sara (2022)
    This thesis concerns non-Lambertian surfaces and the challenges, solutions and research they give rise to in computer vision. The physical theory for understanding the phenomenon is built first, using the Lambertian reflectance model, which defines Lambertian surfaces as ideally diffuse surfaces whose luminance is isotropic and whose luminous intensity obeys Lambert's cosine law. Non-Lambertian surfaces violate at least the cosine law and are consequently specularly reflecting surfaces whose perceived brightness depends on the viewpoint. Non-Lambertian surfaces thus also violate brightness and colour constancy, which assume that the brightness and colour of the same real-world points stay constant across images. These assumptions are used, for example, in tracking and feature matching, so non-Lambertian surfaces complicate object reconstruction and navigation, among other tasks in the field of computer vision. After formulating the theoretical foundation of the necessary physics and a more general reflectance model called the bi-directional reflectance distribution function, a comprehensive literature review of significant studies on non-Lambertian surfaces is conducted. The primary topics of the survey are photometric stereo and navigation systems, while other potential fields, such as fusion methods and illumination invariance, are also considered. The goal of the survey is to formulate a detailed and in-depth answer to what methods can be used to solve the challenges posed by non-Lambertian surfaces, what these methods' strengths and weaknesses are, what datasets are used, and what remains to be answered by further research. After the survey, a dataset is collected and presented, and an outline of another dataset to be published in an upcoming paper is given. Finally, a general discussion of the survey and the study is undertaken, and conclusions along with proposed future steps are presented.
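    For reference, the two relations the review builds on can be stated compactly (standard definitions, generic notation):

```latex
% Lambert's cosine law: radiant intensity falls off with the cosine of the angle
% theta between the surface normal and the emission direction
I(\theta) = I_0 \cos\theta ,
% Bi-directional reflectance distribution function (BRDF)
f_r(\omega_i, \omega_o)
= \frac{\mathrm{d}L_o(\omega_o)}{\mathrm{d}E_i(\omega_i)}
= \frac{\mathrm{d}L_o(\omega_o)}{L_i(\omega_i)\,\cos\theta_i\,\mathrm{d}\omega_i} ,
% For an ideal Lambertian surface f_r is the constant rho / pi, which is why its
% perceived brightness does not depend on the viewing direction.
```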
  • Wrede, Iris (2020)
    Researchers debate the skills required in future jobs and which skills are of particular consequence for the mobility of labor. In researching this topic, many turn to online job vacancy advertisements as a source of abundant, naturally occurring data. Despite the great interest, economics research has often overlooked the nature of job vacancy ads as a context-bound genre of text and the implications this has for the analysis. This thesis has two aims. First, it critically considers the use of online job vacancy advertisements as data for research on labor markets, seeking to advance the methodological rigor of studies using this type of data, particularly data from a Finnish context. Second, it considers, on the basis of a two-stage mixed methods analysis of online job vacancy advertisements published on a Finnish online job board in 2017-2019, the types of skills that Finnish employers call for in successful applicants for professional private sector jobs. The thesis also elaborates on the language aspect of online job vacancy advertising in Finland. The descriptive statistics of the random sample would seem to confirm the repetitive nature of job vacancy ads and the trends in employee ideals discussed in the literature. For example, there appears to be a greater focus on interpersonal skills than on intrapersonal skills. The increasingly globalized nature of the Finnish labor market and workplace is also reflected in the data. Although job advertisements are a tempting source of data, in their current free-form state some doubt can be cast on their relevance as a source of meaningful data on skills. Historical, geographical and other contexts must be carefully considered in the analysis in order to avoid overstating the implications of findings and to better situate and analyze the observed tendencies and trends in the data. Europe-wide efforts to improve job matching may, as a by-product, produce a more robust source of data, should they be widely adopted.
  • Leino, Nea (2021)
    The aim of this research is to examine the impact of population aging on income inequality in Finland over the period from 1991 to 2016. The research question is relevant since population aging is a reality around the world owing to declining birth rates and greater longevity, and these vast demographic and socio-economic changes strain the well-being of nations. This study offers important insights into the discussion of income inequality in Finland, as no similar study has been conducted before. Understanding the link between aging and income inequality helps direct attention to where policy decisions might be needed if inequality grows adversely. The study is carried out using both a decomposition analysis and a shift-share analysis. These methods are commonly used for examining the contribution of particular characteristics to inequality, as they gauge the relative importance of different determinants in overall inequality. They are applied to traditional inequality measures belonging to the family of generalized entropy (GE) measures: the mean logarithmic deviation, Theil's index, and the half-squared coefficient of variation. Using multiple measures in inequality research is advisable, as they provide information about the distribution from different perspectives and clarify where in the distribution change has taken place. To study the impact of population aging on income inequality, the population was partitioned into five age groups (0–39, 40–60, 61–65, 66–70, and 71+), and one- and two-person households were examined. Data for this study come from the Luxembourg Income Study Database (LIS). Income inequality was measured using disposable household income, equivalized with the square root scale. The decomposition analysis allows us to answer how much of total inequality is attributable to variability within each subgroup and how much to differences between subgroups. To complement the decomposition results, the shift-share analysis simulates a Finland that had not aged at all since 1991 or 2000, while other factors remain unchanged at their 2016 levels. The decomposition analysis leads to the clear conclusion that variation within groups is much more significant for total inequality than variation between groups. In light of the shift-share analysis, interestingly, the aged Finland is less unequal than a Finland with the population shares of 1991 or 2000. Hence a study of aging that only examines changes in population shares, ceteris paribus, shows that aging has slowed down the rise of inequality in Finland. This is because the age structures of 1991 and 2000 placed more weight on the most unequal, or second most unequal, age groups relative to other age groups than the population distribution of 2016 does.
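    The within/between split used in the decomposition analysis can be illustrated with the mean logarithmic deviation (GE(0)); the notation is generic:

```latex
L \;=\; \frac{1}{N}\sum_{i=1}^{N}\ln\frac{\mu}{y_i}
\;=\; \underbrace{\sum_{g}\frac{N_g}{N}\,L_g}_{\text{within groups}}
\;+\; \underbrace{\sum_{g}\frac{N_g}{N}\,\ln\frac{\mu}{\mu_g}}_{\text{between groups}} ,
% where mu is overall mean equivalized income, mu_g and L_g are the mean and MLD
% within age group g, and N_g / N is its population share. The shift-share exercise
% re-weights the groups with the 1991 or 2000 population shares while holding the
% within-group distributions at their 2016 values.
```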