
Browsing by study line "Research track"


  • Airamo, Niko (2022)
    Regret is the value lost by playing an action on the current round of an iterative game. The idea of regret matching is to generate strategies that minimize regret, since such strategies are guaranteed by a folk theorem to converge to a Nash equilibrium in a two-player zero-sum game. Storing cumulative regrets after each iteration enables the use of regret matching, an algorithm that chooses the next iteration's strategy based on the cumulative regrets of the actions. This procedure by itself converges to a Nash equilibrium in a normal-form zero-sum game. For extensive-form games, however, storing the data becomes too resource-demanding even at moderate game sizes. It is nevertheless possible to minimize regret on each information set separately using counterfactual regret. Counterfactual regret is calculated from counterfactual values: the counterfactual value is the expected value given that the player tries to reach an information set and play a certain action in it. The difference between the value of the action and the expected value of the information set is the counterfactual regret. Minimizing these regrets likewise converges toward a Nash equilibrium. Using the counterfactual regret minimization framework, I create an iterative self-play algorithm to solve two-player zero-sum imperfect-information games. In this thesis I work with the last betting round of limit Hold'em. CFR+ uses improved strategy averaging, does not store negative regrets, and alternates the strategy-updating player between iterations. I eventually constructed the CFR+ algorithm, and it appears extremely effective compared to earlier versions of the algorithm.
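The regret-matching update described in the abstract can be sketched in a few lines. This is an illustrative toy using a rock-paper-scissors payoff matrix, not the thesis's poker setting: the next strategy weights each action in proportion to its positive cumulative regret.

```python
import numpy as np

def regret_matching_strategy(cum_regrets):
    """Next-iteration strategy from cumulative regrets: play each action
    in proportion to its positive regret; fall back to uniform when no
    regret is positive."""
    positive = np.maximum(cum_regrets, 0.0)
    total = positive.sum()
    if total > 0:
        return positive / total
    return np.ones_like(cum_regrets) / len(cum_regrets)

# One regret update in a normal-form zero-sum game
# (illustrative payoff matrix, not from the thesis):
payoffs = np.array([[0.0, -1.0, 1.0],
                    [1.0, 0.0, -1.0],
                    [-1.0, 1.0, 0.0]])   # rock-paper-scissors

cum_regrets = np.zeros(3)
strategy = regret_matching_strategy(cum_regrets)  # uniform at start
opponent = np.array([1.0, 0.0, 0.0])     # opponent plays "rock"
action_values = payoffs @ opponent       # value of each pure action
expected = strategy @ action_values      # value of current strategy
cum_regrets += action_values - expected  # regret update
```

After this single update all positive regret sits on "paper", so the next strategy plays it exclusively; averaging strategies over many iterations is what converges to equilibrium.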
  • Apell, Kasperi (2022)
    The phrase 'central limit theorem' has commonly come to stand for a result where partial sums of random variables converge in distribution to a Gaussian random variable. Theorems of this nature readily yield applications to statistics and econometrics, since they form the theoretical basis for approximating the sampling distribution of a given test statistic when the exact distribution is intractable or otherwise infeasible to retrieve. Faced with such a situation, a researcher can instead ask whether the test statistic, or a certain transformation of it, converges in distribution as the sample size grows without bound. If the answer is in the affirmative, then one may in a principled manner approximate the distribution of the finite-sample statistic with the limit distribution, and the approximation can be made in some sense arbitrarily good by sufficient increases in the sample size. Naturally, similar procedures apply to estimators. These asymptotic normality results for econometric estimators, as they are called, require differing conditions to be satisfied depending on the nature of the data-generating process from which the observations are thought to originate. This thesis examines a selection of foundational central limit theorems in the cases of I.I.D., independent, D.I.D., and dependent data-generating processes and presents examples of their econometric applications, primarily to deduce asymptotic normality for a selection of key econometric estimators.
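The canonical i.i.d. case mentioned above is the Lindeberg-Lévy central limit theorem; a standard textbook statement (generic, not specific to this thesis) is:

```latex
% X_1, X_2, \dots i.i.d. with E[X_i] = \mu and \operatorname{Var}(X_i) = \sigma^2 < \infty:
\sqrt{n}\,\bigl(\bar{X}_n - \mu\bigr) \;\xrightarrow{d}\; \mathcal{N}(0, \sigma^2),
\qquad \bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i .
```

An asymptotic normality result for an estimator takes the analogous form \(\sqrt{n}(\hat{\theta}_n - \theta_0) \xrightarrow{d} \mathcal{N}(0, V)\), where \(V\) is the asymptotic variance.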
  • Holster, Tuukka (2019)
    The design of college admissions has been a heatedly discussed topic in Finland, as recent government initiatives have led to a more centralized system. Some argue for letting colleges decide on their admissions procedures, while others believe that a centralized matching procedure with priorities determined by the matriculation examination would be more cost-effective. This thesis aims to characterize the various factors that a policy maker must take into account when designing a college admissions procedure, in light of existing theoretical research on both centralized and decentralized matching markets and empirical studies on the social determinants of college choice and the capacity of entrance examinations to elicit information on student ability and motivation. The two-sided matching literature is discussed extensively because of its usefulness for designing centralized clearinghouses for matching markets. The student-proposing deferred acceptance algorithm emerges as the best choice for a policy maker who regards strategy-proofness and respect for priorities as especially important, at least if manipulation by colleges is implausible. However, strategy-proofness is fragile in practical applications: applicants may try to manipulate even strategy-proof mechanisms, and reporting the whole preference relation is still only weakly dominant. Consequently, satisfaction of reported preferences should not be taken, without qualification, as evidence of the welfare properties of a matching. The use of a common entrance examination may be more cost-effective than a system based on college-specific entrance examinations, as colleges then do not need to spend resources on organizing the examinations. However, students then have stronger incentives to perform in the common entrance examination, and there is already evidence that more students retake the matriculation examination in Finland. The overall effect on the costs of organizing entrance examinations is an uncertain empirical matter. The importance of preparation courses is likely to decrease, which saves resources and contributes to socioeconomic equity. On the other hand, making students choose their study paths earlier in life may erode socioeconomic equity. A larger role for the matriculation examination provides stronger incentives to show effort in high school, which the policy maker may see as beneficial. While a system with a common entrance examination makes it possible for a student to be admitted to a second preference when she is rejected by her first preference, it remains an empirical question to what extent this reduces the propensity to apply again to competitive colleges. The excess demand for certain colleges is a result of student preferences and is not solvable by any mechanism that gives a strong priority to satisfying student preferences.
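The student-proposing deferred acceptance algorithm discussed above can be sketched as follows. This is a minimal illustration with made-up names; real clearinghouses add tie-breaking rules and quota details.

```python
def deferred_acceptance(prefs, priorities, capacities):
    """Student-proposing deferred acceptance.
    prefs:      {student: [colleges in order of preference]}
    priorities: {college: [students in order of priority]}
    capacities: {college: number of seats}
    Returns {student: college or None}."""
    rank = {c: {s: i for i, s in enumerate(order)}
            for c, order in priorities.items()}
    next_choice = {s: 0 for s in prefs}   # next college each student proposes to
    held = {c: [] for c in priorities}    # tentatively held students
    free = list(prefs)
    while free:
        s = free.pop()
        if next_choice[s] >= len(prefs[s]):
            continue                      # exhausted list: stays unmatched
        c = prefs[s][next_choice[s]]
        next_choice[s] += 1
        held[c].append(s)
        held[c].sort(key=lambda t: rank[c][t])   # best priority first
        if len(held[c]) > capacities[c]:
            free.append(held[c].pop())    # reject the lowest-priority student
    match = {s: None for s in prefs}
    for c, students in held.items():
        for s in students:
            match[s] = c
    return match
```

Because offers are only held tentatively until the process stops, no student can gain by misreporting preferences, which is the strategy-proofness property the abstract refers to.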
  • Markkula, Tuomas (2020)
    This thesis evaluates the effects of entry on incumbent firms' prices and procedure volumes in dental care markets, using difference-in-differences regression and administrative data on private dental care visits reimbursed by the Social Insurance Institution of Finland. The entry is treated as a competition-increasing shock. The entrant's prices were remarkably low at the time of entry, and the firm was able to acquire a large share of the volume in common procedures performed in the market; thus the entrant offered a real low-cost alternative to the residents of the Capital Region. I focus on examinations and fillings, two of the most common procedures. Patients face switching costs when changing their dental care provider, which means that incumbent firms with locked-in customers may be able to accommodate the entry more easily than they could without the switching costs. The results show that incumbent firms do not lower their prices in response to the entry by an economically significant amount. However, the results suggest that incumbent firms perform fewer fillings after the entry, an effect driven by the summer months. The pattern in which incumbent firms do not change their prices and lose a share of their turnover to the entrant is consistent with the theoretical switching-costs literature.
  • Anttonen, Jetro (2019)
    In this thesis, a conditional BVARX forecasting model for short- and medium-term economic forecasting is developed. The model is especially designed for small open economies, and its performance in forecasting several Finnish economic variables is assessed. Particular attention is directed to the hyperparameter choice of the model. A novel algorithm for hyperparameter choice is proposed and shown to outperform the marginal-likelihood-based approach often encountered in the literature. Other prominent features of the model include conditioning on predictive densities and the exogeneity of the global economic variables. The model is shown to outperform univariate benchmark models in forecasting accuracy for horizons up to eight quarters ahead.
  • Tahvonen, Ossi (2021)
    Despite continuous improvements in treatments, childhood cancers are among the most common causes of death for children in Finland. Cancer treatments are often arduous and have long-lasting effects even beyond the person diagnosed. Estimating these effects is important, since they can affect the cost-effectiveness of many policies. This thesis focuses on estimating the effects of childhood cancer on parental labour market outcomes, especially on the earnings gap between genders. Estimating the causal connection between health and socioeconomic variables is difficult for many reasons. In this thesis, a quasi-experimental method called staggered differences-in-differences is employed to solve this problem: families in which cancer is diagnosed are compared to families diagnosed at a different time, which allows the true causal effect to be estimated. The thesis uses administrative data on all childhood cancer diagnoses in Finland during the years 1999-2017. The results show that childhood cancer reduces parents' income significantly. In the short run, the effect is around 30% of pre-diagnosis income for mothers and around 7% for fathers. For mothers, the decline in employment is also significant. The welfare state supports these families to the extent that the decline in after-transfer income is not as large. The gender earnings difference increases by around 20% in the short run, and it increases also in families where the mother was the main provider in the years before the diagnosis. My results are robust to various checks, including alternative estimators that correct for possible cohort-heterogeneous effects. The previous research is scarce and has produced differing estimates, but the results of this thesis are in line with the most relevant literature. The decline in earnings can be caused by the need to take care of the child, mental health effects, or reduced accumulation of human capital. While it is hard to suggest policy changes based on these results alone, the current benefits for families with ill children are short in duration and focused on one person, and the results indicate that a longer benefit scheme distributed more evenly between genders might produce different outcomes.
  • Haaga, Tapio (2020)
    I study whether modest copayment increases affect general practitioner (GP) use in Finland, a country with relatively low copayments, low inequality, and an extensive welfare state. I also examine whether the estimates are driven by certain low-income groups considered economically vulnerable. The Finnish Government allowed municipalities to increase copayments in 2015 and 2016 by 9.5% and 27.5% respectively. At maximum, this meant that the copayment for a GP visit was 20.90 euros at the beginning of 2016, approximately 40% higher than at the end of 2014. Almost all municipalities made the 9.5% increase at the start of 2015, but some decided not to make the 27.5% increase at all or made smaller increases. I exploit this variation by estimating two-way fixed effects regression models, using population-wide administrative data containing all primary healthcare visits in 2013-2018 and socioeconomic information on patients. In the models, copayment increases are negatively associated with both GP use and median waiting times. Based on the means of estimates from several specifications, the 27.5% increase alone is associated with a 2% decrease in visits per resident in the first four quarters after the change and a 6% decrease thereafter; these estimates are not statistically significant. The median waiting times decrease by two days in the first year after the change and five days thereafter, and these results are significant. When I estimate the effects of both increases together, the means of estimates are a 5% decrease in visits per resident in the first four quarters after the last increase and an 8% decrease thereafter; some of these estimates are statistically significant. I find no evidence to support the hypothesis that the low-income groups were more sensitive to the increases. However, the confidence intervals are wide across the study, suggesting that the design may be underpowered to distinguish small effects from zero. Moreover, the point estimates are far from zero, which is especially surprising for the upper income quintiles. Therefore, more evidence is needed before firm conclusions can be drawn about the causal effects of such policy changes.
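A two-way fixed effects model of the kind estimated here takes, in generic form (the symbols are illustrative, not the thesis's exact specification):

```latex
y_{it} = \alpha_i + \gamma_t + \beta\, D_{it} + x_{it}'\delta + \varepsilon_{it},
```

where \(\alpha_i\) are municipality fixed effects, \(\gamma_t\) are time fixed effects, \(D_{it}\) indicates that a copayment increase is in force in municipality \(i\) at time \(t\), and \(\beta\) is the coefficient of interest.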
  • Tolonen, Topias (2020)
    We consider a so-called principal-agent problem, where our aim is to construct an optimal contract that maximises utilities for the contractor (the principal) and for the effort-exerting party (the agent). In our setting, the time horizon of the contract is infinite, and the agent receives a continuously paid compensation for exerting effort. Our main goal is to formulate a problem introduced in Sannikov (2008) and characterise an optimal contract by restricting the menu of feasible contracts, an approach inspired by Cvitanic et al. (2018). We begin with an extensive literature review of continuous-time principal-agent problems, which further motivates the scope of this thesis: we start from the notable article Holmström et al. (1987) and progress towards the works of Williams (2008), Sannikov (2008), and Cvitanic et al. (2018). Following the review, we construct the problem and lay its mathematical foundations. We focus on a new benchmark setting, adapted from Sannikov (2008) and Cvitanic et al. (2018). We first define the so-called controlled state equation and construct the canonical probability space. We then impose a few assumptions regarding the core concepts, identify the problems of the agent and the principal, and characterise their objective functions. After setting up the problem, we characterise the optimal contract and show that it maximises the principal's profit. We relate the difference function of Sannikov (2008) to the agent's optimisation problem, and then follow Cvitanic et al. (2018) in reducing the problem: the reduction restricts the possible menu of contracts and thus turns the non-standard problem into a dynamic programming problem. We introduce the corresponding Hamiltonian functionals, together with the value functions for both the principal and the agent. Furthermore, we introduce a family of restricted processes, which we show to characterise the optimal contract. We finish by showing that the optimal contract exists even with the notion of retirement. Having completed the main technical contribution, that is, having solved for the optimal contract, we briefly discuss the results and their implications against the previous literature, as well as possible extensions to our research.
  • Leinonen, Nea (2021)
    Earnings insurance is a measure of how much a worker's earnings change relative to an idiosyncratic shock to firm performance. Under full earnings insurance, shocks are not passed on to earnings at all; otherwise, earnings insurance is partial. Linked employer-employee data is widely used in this recent strand of empirical research because of its richness. This paper explores earnings insurance in Finland by focusing on three questions: how much earnings insurance firms in Finland provide, whether there are differences in the level of earnings insurance between industries, and whether partial earnings insurance is driven by hours worked or by the hourly wage. The main findings are that firms in Finland provide partial but substantial earnings insurance in all industries, and that hours worked play a smaller role than the hourly wage in determining partial earnings insurance. Because data on hours worked is not available in many countries, this paper is able to offer new insight into the roles of hours worked and the hourly wage in the composition of partial earnings insurance. However, hours worked and the hourly wage explain only around half of partial earnings insurance, so the result should be treated with caution.
  • Wegelius, Aino (2022)
    This thesis studies what kind of strategic incentives the mechanism applied in Finnish college admissions in the fields of Business Administration and Economics (BAE) during 2015-2017, referred to as the Priority Point Mechanism (PPM), offers, and how applicants respond to these incentives. A special type of strategy is characterised that can only hurt a student and that strategically sophisticated students should therefore avoid under the mechanism. Given this strategy, the thesis investigates whether the applicants' behaviour is consistent with some students responding to the incentives of the mechanism, and whether some students fail to respond to them. Using data on BAE applicants' full rank-order lists (ROLs) and applying a first-differences approach, hypotheses associated with strategic behaviour are tested. The results are consistent with some students strategizing under the PPM: the removal of the priority points increases the probability of ranking the most prestigious programme first by 5.2 percentage points (p<0.001), and for the most prestigious programme pairs, it increases the probability of ranking programmes with small expected cut-off differences by 5.5-12.7 percentage points (p<0.01). However, out of the three programme pairs studied, for one pair the estimated effect is 2.1 percentage points and insignificant (p≈0.11). There is no evidence that these behavioural changes translate into longer ROLs: the estimate is 0.069 more study programmes ranked when priority points are removed, and it is insignificant (p≈0.34). Students who fail to respond to the strategic incentives of the PPM do exist. During 2016 and 2017, 7-9% of students submitted an ROL by which they could only be hurt, and in 2017, 2.8% of students submitted an ROL that clearly demonstrates a lack of strategic sophistication. Motivated by this result, students who made a mistake are compared to those who did not. The results suggest that more experience and the absence of informational disadvantages do not protect students from playing a strategy by which they can only be hurt, while these aspects seem to be negatively correlated with making a mistake that demonstrates a lack of strategic sophistication. For both mistake types, making a mistake is associated with lower academic aptitude. The finding that students' behaviour is consistent with some applicants behaving strategically under the PPM has implications for whether true preferences should be inferred from stated preferences when these are stated under a manipulable mechanism. Furthermore, some students behaving strategically while others fail to respond to the incentives can result in unfair allocations in which some students justifiably envy others. In addition, factors such as luck, attitudes to risk, confidence, and difficulties in predicting entry thresholds may contribute to who ends up being selected. Therefore, given the importance of college admissions for young students' future prospects, how applicants respond to the incentives of the mechanism applied, and how that in turn affects the fairness of the resulting allocation of students to colleges, remain questions which deserve more research.
  • Laakso, Tomi (2022)
    Flash crashes are among the most prominent market inefficiencies observed in stock and index prices. They violate the efficient markets hypothesis and affect the real economy as well. Properly forecasting them has not been possible with conventional methods because of their seeming rarity and extremity; furthermore, they are difficult to detect in the noise of price processes. By augmenting the HAR model with company-specific news data, the aim is to improve volatility estimates on days when these extreme events occur. These days are first identified from the price processes by a novel statistical method called the V-statistic, which detects statistically significant flash crashes. The data consist of every executed trade for six stocks on the New York Stock Exchange, together with company-specific news-flow data obtained from the RavenPack News Analytics service; both data sets cover the years 2014 to 2016. The HAR model is estimated for all six stocks with and without the news data, and the in-sample model estimates are compared both on the full data and on the days when flash crashes occur. The results are ambiguous but give slight indications that news data could be useful for improving volatility estimates on days when flash crashes occur. Model volatility estimates are better on average when the model is augmented with news data: the mean absolute percentage error is 0.1% smaller on average across all entities. However, there are differences across companies in how much, if at all, the news data improve model performance. In conclusion, further work is needed to verify the usefulness of news data in forecasting flash crashes.
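The HAR model aggregates realized volatility over daily, weekly, and monthly horizons; a news-augmented variant of the kind described above plausibly takes the following generic form (the news term and coefficient names are illustrative, not the thesis's exact specification):

```latex
RV_{t+1} = \beta_0 + \beta_d\, RV_t + \beta_w\, RV_t^{(w)} + \beta_m\, RV_t^{(m)}
           + \gamma\,\mathit{News}_t + \varepsilon_{t+1},
\qquad
RV_t^{(w)} = \tfrac{1}{5}\sum_{j=0}^{4} RV_{t-j}, \quad
RV_t^{(m)} = \tfrac{1}{22}\sum_{j=0}^{21} RV_{t-j}.
```

The averaged lags let a simple linear regression capture the long memory of volatility, which is why the HAR model is a common benchmark in this literature.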
  • Mäkelä, Elias (2022)
    Pharmacies form an essential part of the Finnish pharmaceutical supply system; their task is to ensure the efficient and safe provision of medicines. Because the pharmacy market is heavily regulated, pharmacies form local monopolies. The taxation of pharmacies and pharmacists aims to cover the costs of pharmaceutical services and to achieve redistributive goals. This thesis investigates whether pharmacies engage in tax optimization and what its economic effects are. Tax responses are often studied using natural experiments created by tax reforms, but no significant changes occurred in pharmacy taxation during the data period. The data consist of FIMEA's financial data on pharmacies for 2010-2018, combined with openly available data on the separate limited companies operating on pharmacy premises. Pharmacies' tax optimization is approached by examining the tax incentives created by overall taxation, and stylized models of tax behaviour are compared with the realized behaviour of pharmacies and pharmacists. Logistic regression analysis is used to rule out that the tax-optimizing behaviour occurs by chance. The results indicate that systematic tax optimization through income shifting via separate companies is becoming more common among pharmacists. Pharmacists in the top income decile thereby achieve a tax advantage of almost 15 percentage points. The probability of tax optimization increases with the amount of shiftable income and with the pharmacist's personal tax rate; when shiftable income exceeds 200 thousand euros, the probability of a separate company exceeds 50 per cent. The loss of tax revenue, after accounting for the growth in corporate taxes paid, has risen to about 15 million euros per year. The results are in line with the empirical literature on tax optimization by small businesses. For further research it would be useful to link tax data at the individual level to the data set, and it would be interesting to extend the analysis to all unlisted limited companies.
  • Togno, Francesca (2020)
    This thesis examines how much a representative consumer in each country is willing to pay to avoid global warming, by analysing their welfare gains from having a smoother consumption path. Temperature variations affect economic activity, and consumption is subject to shocks related to global warming. I start by reviewing the economic literature on the relationship between temperatures and economic activity: I highlight the main economic effects correlated with rising temperatures and review the methods usually employed by economists to assess environmental damages. I then take a sample of 163 countries and compute each country's welfare gains from having a smoother consumption path, following the method used by Lucas (2003). To do this, I use country-level household consumption data and set values for the risk aversion coefficient following the suggestions of the previous economic literature. I repeat the experiment with a smaller sample of 72 countries, this time using country-specific risk aversion coefficients retrieved from Gandelman and Hernández-Murillo (2015). In both cases, I find that most countries have welfare gains on the order of 10^-2 to 10^-3. Using annual temperature data, I test the Spearman correlation between welfare gains and average temperatures. Although the previous literature stressed the adverse effects of global warming on the economy, I find no significant correlation between these two variables: countries that are more at risk do not display higher welfare gains than countries with a lower risk of imminent climate damages. To explain my results, I then consider determinants of risk aversion other than temperature and conclude that risk aversion, and consequently the value of the welfare gains, can depend on several other factors.
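Lucas's (2003) calculation defines the welfare gain \(\lambda\) as the permanent percentage increase in consumption that makes the consumer indifferent between the risky path and the smooth one. With CRRA utility and log-normally distributed consumption shocks, it reduces to a simple closed form (standard result, reproduced here in generic notation):

```latex
E\!\left[\,u\bigl((1+\lambda)\,c_t\bigr)\right] = u\bigl(E[c_t]\bigr),
\qquad u(c) = \frac{c^{1-\gamma}}{1-\gamma}
\;\Longrightarrow\;
\lambda \approx \tfrac{1}{2}\,\gamma\,\sigma^2,
```

where \(\gamma\) is the coefficient of relative risk aversion and \(\sigma^2\) is the variance of log consumption. Since \(\sigma^2\) is small for most countries, gains on the order of \(10^{-2}\) to \(10^{-3}\) are plausible.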
  • Korhonen, Markus (2019)
    Achieving price stability has become one of the most important tasks of central banks worldwide, and many central banks aim at a specific, well-defined inflation target. Likewise, the European Central Bank uses its monetary policy to keep inflation close to two per cent. An inflation target, however, requires that inflation can be forecast as accurately as possible. Neural network models, a class of machine learning methods, have proven to be good forecasting models in many fields, but in inflation forecasting their results have been mixed. Previous research on inflation forecasting has also focused mainly on the United States and other individual countries, and no research has examined forecasting inflation with neural network models across different phases of the business cycle. This thesis therefore examined the ability of a neural network model to forecast inflation in the whole euro area during the 2008-2009 recession. The data consisted of an inflation time series constructed from the euro area harmonised index of consumer prices for 1997-2010. The nonlinear neural network was built with a method established in the earlier literature, in which model selection was carried out on a separate data set. The selected model was used to simulate a genuine forecasting situation on test data from the euro-area recession. Forecasts were also made for the post-recession upswing so that different cyclical conditions could be compared. In addition, the same forecasts were made with a linear model established in econometrics, against which the neural network model was compared using evaluation criteria and statistical tests familiar from the earlier literature. The thesis found that the neural network model produces very accurate inflation forecasts at all forecast horizons used in the thesis, and its forecasts are better if the data are seasonally adjusted. The neural network model makes smaller forecast errors during the upswing than in the recession, but the differences between cyclical conditions are not very large. However, the neural network model's forecasts do not differ statistically significantly from those of a simple linear model in either cyclical condition, so it cannot be concluded that the neural network model behaves differently in an economic recession than in other cyclical conditions. Based on these results, a neural network model cannot be recommended as a central bank inflation forecasting model, because model selection and testing take more time than a simple linear model while the forecasts are no better. The results thus provide evidence that euro-area inflation is a linear process, in which case nonlinear models add no forecasting benefit. Neural network models can nevertheless provide a good tool for evaluating central bank performance, since their forecasts are accurate even at longer horizons.
  • Holmberg, Daniel (2022)
    The LHC particle accelerator at CERN probes the elementary building blocks of matter by colliding protons at a center-of-mass energy of √s = 13 TeV. Collimated sprays of particles arise when quarks and gluons are produced at high energies; these sprays are reconstructed from measured data and clustered into jets. Accurate measurements of jet energies are paramount for sensitive particle physics analyses at the CMS experiment. Jet energy corrections are therefore used to map measurements towards Monte Carlo simulated truth values, which are independent of detector response. The aim of this thesis is to improve upon the standard jet energy corrections by utilizing deep learning. Recent advances in learning from point clouds in the machine learning community have been adopted in particle physics studies to improve jet flavor classification accuracy, including representing jet constituents as an unordered set, a so-called “particle cloud”. Two highly performant models suitable for such data are the set-based Particle Flow Network and the graph-based ParticleNet. A natural next step in the advancement of jet energy corrections is to adopt a similar methodology, changing the problem statement from classification to regression. The deep learning models developed in this work provide energy corrections that are generically applicable to differently flavored jets. Their performance is presented in terms of jet energy response resolution and reduction in flavor dependence. The models achieve state-of-the-art performance on both metrics, significantly surpassing the standard corrections benchmark.
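The property that makes "particle cloud" architectures such as the Particle Flow Network suitable for unordered jet constituents is permutation invariance: embed each particle, pool with an order-independent operation, then map the pooled vector to the output. A minimal sketch with random, untrained placeholder weights (purely illustrative, not the thesis's trained networks):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-particle feature map (phi) and readout (rho); the weights
# are random placeholders, not trained jet-correction networks.
W_phi = rng.normal(size=(4, 8))  # 4 input features -> 8 latent dims
w_rho = rng.normal(size=8)       # latent -> scalar correction factor

def particle_cloud_regression(particles):
    """Permutation-invariant regression over an unordered particle set:
    embed each particle, sum-pool, then map to a single output."""
    latent = np.tanh(particles @ W_phi)  # per-particle embedding
    pooled = latent.sum(axis=0)          # order-independent pooling
    return float(pooled @ w_rho)

jet = rng.normal(size=(30, 4))           # 30 particles, 4 features each
shuffled = jet[rng.permutation(30)]
out1 = particle_cloud_regression(jet)
out2 = particle_cloud_regression(shuffled)  # same value: order is irrelevant
```

Because the sum over constituents commutes with any reordering, the output depends only on the set of particles, which is exactly the inductive bias that set-based and graph-based jet models exploit.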
  • Hentunen, Saul (2022)
    Statistical price estimates of plots are useful for constructing a valuation-based price index and for apportioning the prices of large plot transactions among their components. This study extends earlier research on residential plot prices by examining the prices of commercial and office plots. It investigates whether the prices of business-premises plots differ from those of residential plots, and assesses the accuracy of the models' price estimates in modelling plot prices. The study uses the purchase price register of the National Land Survey of Finland (Maanmittauslaitos), which contains information on real-estate and plot transactions made in Finland. The conditions used to delimit the register data are presented, together with the datasets and methods used to supplement it. The models used to compute the plot price estimates are described in detail. Plot prices are modelled with a linear model and with a regression tree model boosted by a machine learning method. The explanatory variables used in the models were selected from the register data with the help of earlier research. From the register data it is possible to assemble several factors with which the price per square metre of a plot can be estimated. However, the models do not allow the unambiguous conclusion that commercial and office plots are inherently more valuable than residential plots. After removing outlier plot transactions, the boosted regression tree model estimates residential plot prices to within 15 per cent for about one third of the plots; for commercial and office plots the corresponding accuracy is reached for about one sixth of the business-premises plots. Based on the results, the study recommends modelling plot prices with machine-learning-boosted regression trees rather than a linear model. To improve the accuracy of the price estimates, it recommends enlarging the dataset by extending the time span, in particular by increasing the number of commercial and office plots in the research data, and investigating the land quality factors of the plots in more detail.
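The accuracy criterion used in the abstract above, the share of plots whose predicted price falls within 15 per cent of the realised price, can be sketched as follows; the prices here are made-up illustrative numbers, not the register data.

```python
def share_within_tolerance(actual, predicted, tol=0.15):
    """Share of observations whose prediction lies within tol of the actual price."""
    hits = sum(abs(p - a) / a <= tol for a, p in zip(actual, predicted))
    return hits / len(actual)

# Hypothetical per-square-metre prices (EUR) and model estimates.
actual = [100.0, 200.0, 150.0, 80.0]
predicted = [110.0, 260.0, 145.0, 120.0]
print(share_within_tolerance(actual, predicted))  # → 0.5
```

Comparing this share between the linear model and the boosted regression tree, separately for residential and business-premises plots, is exactly the kind of comparison the abstract reports (about one third versus about one sixth of plots within 15 per cent).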
  • Seppä, Meeri (2022)
    This thesis studies the effects of delayed discharge fees in Finland. Excessive lengths of hospital stays are a significant source of inefficiency in the health care system. Delayed discharge occurs when a patient who is medically fit to leave the hospital cannot do so for non-medical reasons. In Finland, several hospital districts have implemented financial fees to curb delayed discharges. As the fees were not adopted simultaneously everywhere, this provides a desirable research setting. I use a staggered difference-in-differences design and patient-level data to study how the delayed discharge fees reduce the length of hospital stays and the probability of urgent hospital readmission. I presume that the fees work as an incentive to increase the supply of post-acute care beds. Hence, the implementation of delayed discharge fees would lead to fewer delays and consequently to shorter hospital stays and earlier access to post-acute care. Previous literature suggests that there is an inverse correlation between delayed discharges and the availability of post-acute care beds. In addition, there is evidence that health care providers react to financial incentives. However, the existing literature documents contradicting results on the effects of delayed discharge fees. The chosen identification strategy does not yield valid results when using the length of stay as a dependent variable: my results suggest that the parallel trends assumption does not hold, as the pre-treatment trends persist even after controlling for group-specific variables. I find that the delayed discharge fees reduce the probability of readmission for elderly hip-fracture patients. The effect is modest in size but increases over time. Six years after the implementation, the effect of the fees is −0.059 per cent. The classical two-period difference-in-differences model concludes that the decrease in probability associated with the delayed discharge fees is −0.018 per cent. Although significant, the reduction in probability is small, and strong conclusions should therefore be avoided. My results suggest that delayed discharge fees could have positive implications for patients’ health, but that their effects should be studied further.
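The classical two-period difference-in-differences estimate mentioned in the abstract above reduces to a comparison of four group means; a minimal sketch with made-up numbers, not the thesis's patient-level data:

```python
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD = (treated post-pre change) minus (control post-pre change)."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Hypothetical readmission probabilities (per cent) before/after fee adoption,
# in districts that adopted the fee (treat) and districts that did not (ctrl).
treat_pre, treat_post = [5.0, 5.2, 4.8], [4.9, 5.1, 4.7]
ctrl_pre, ctrl_post = [5.0, 5.1, 4.9], [5.0, 5.2, 4.9]
print(round(did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post), 3))  # → -0.133
```

The staggered design generalises this by letting different districts adopt the fee at different times, which is also why the parallel trends assumption becomes harder to defend, as the abstract notes.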
  • Sahlström, Ellen (2022)
    This thesis investigates the within-school segregation of Finnish students using survey data on friendships. The difference in academic performance between students with and without immigrant backgrounds is large, and in order to narrow it, the study environment of students with immigrant backgrounds must be understood. Identifying the extent to which students with immigrant backgrounds experience a different social environment is one step in that direction. Individual segregation is the extent to which the social network of an individual is composed of individuals similar to each other with regard to some specific trait. This study investigates the existence of individual segregation in Finnish schools using information on whom the fifth-grade students participating in the study are friends with. The individual segregation level is calculated based on the background of friends, dividing students into two groups: students with or without immigrant backgrounds. This gives an indication of possible segregation at the individual level, created through friend choice. Additionally, the correlations between individual-level segregation and, respectively, age at arrival in Finland and academic skills are studied. Clear evidence of individual-level segregation among immigrants is found. Students with immigrant backgrounds are more likely to have friends with immigrant backgrounds and more likely to be lonely, that is, to have no friends. However, no correlation is found between individual segregation and either age at arrival or academic skills. This could be explained by problems with the data, but could also indicate that peer effects in class are smaller than expected based on previous research. Segregation patterns also seem to differ from what has been found in similar American studies. More research needs to be done, but this thesis shows that students with immigrant backgrounds experience a different social environment with respect to friends than students with non-immigrant backgrounds do, as the share of friends with immigrant backgrounds is significantly higher for students who themselves have immigrant backgrounds.
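The individual-level measure described in the abstract above, the share of a student's friends who share the student's own background, can be sketched as follows; the friendship lists and group labels are illustrative assumptions, not the survey data.

```python
def friend_share(friends, trait):
    """Share of a student's friends who have the given trait; None if lonely."""
    if not friends:
        return None
    return sum(1 for f in friends if f == trait) / len(friends)

# Hypothetical students: own background plus the backgrounds of reported friends,
# with each person tagged 'imm' (immigrant background) or 'non' (non-immigrant).
students = {
    "A": ("imm", ["imm", "imm", "non"]),
    "B": ("non", ["non", "non", "non", "imm"]),
    "C": ("imm", []),  # lonely: no friends reported
}

for name, (background, friends) in students.items():
    print(name, background, friend_share(friends, background))
```

Comparing these shares against the overall population share of each group indicates whether friend choice is segregated beyond what random mixing would produce; loneliness (an empty friend list) is kept as a separate outcome, as in the abstract.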
  • Sirviö, Tom-Henrik (2021)
    In recent decades, corporate income tax rates have declined. This development may be due to several factors, such as changes in the political environment or increased knowledge of the behavioral effects of corporate taxation. One prominent explanation is tax competition. Tax competition is defined as noncooperative tax regime setting by independent governments such that the tax policy decisions affect the allocation of mobile tax base(s) among different countries. Tax regimes include the statutory tax rates as well as the tax bases and other parts of corporate tax systems. Tax competition is an important phenomenon for multiple reasons. Lower tax rates may imply difficulties in financing the public sector. On the other hand, tax competition makes some of the problems of the international corporate tax system visible. This thesis reviews theories of tax competition. The aim is to provide an analysis of the different aspects of tax competition and to review how different institutional structures are modelled in the tax competition framework. Studying the main implications of tax competition is an important part of the thesis. The first formal models of tax competition consider a world economy consisting of many identical countries. Firms maximize profits and governments maximize the utility of the representative agent. It is shown that tax competition drives tax rates to an inefficiently low level. Later tax competition research develops both the institutional set-up of tax competition and the modelling frameworks applied to study it. The results of this thesis imply that the result of the first tax competition models is robust: tax competition drives corporate tax rates to an inefficiently low level even if the key assumptions are relaxed. On the other hand, some important extensions imply that corporate income tax rates are at an optimal level even in the presence of tax competition. However, in most cases tax competition is harmful, since it drives corporate income tax rates to inefficiently low levels. The tax competition literature also provides other results. Considering sequential instead of simultaneous tax competition shows that equilibrium tax rates may be higher if one country acts as a leader. This may be one explanation why tax rates have not declined to zero, as the famous race-to-the-bottom hypothesis suggests. The tax competition literature also analyses the effect of tax havens. These results are, however, more mixed: tax competition can decrease welfare or be welfare-improving. This thesis reviews important aspects of the tax competition literature and focuses on the institutional set-up of theories of tax competition. Some gaps remain in the literature: certain institutional set-ups have not been analysed in the tax competition context, and the literature focusing on the empirics of tax competition is scarce. One important aspect of tax competition is how to limit it; this issue currently receives a great deal of attention in the work of the OECD, for example.
  • Holvio, Anna (2021)
    Whereas primary school enrolment has grown to be nearly universal on a global scale, learning results have not kept up with the rapidly expanding systems. This is particularly true in Mozambique, where fourth-grade students lack basic literacy and numeracy skills. Research has established that teacher quality has a large effect on student achievement. Among observable teacher characteristics, teacher content knowledge has most consistently been found to have a positive impact on student achievement. This study seeks to answer how large a causal impact teacher content knowledge has on student achievement in Mozambican primary schools. The data for this study come from a Service Delivery Indicator survey conducted in Mozambique in 2014. They include assessments of fourth-grade students and their teachers in math and Portuguese, and are nationally representative. The empirical analysis exploits within-student across-subject variation. This makes it possible to introduce not only student fixed effects but also teacher fixed effects into the model, because all students in the sample are taught by the same teacher in both subjects, thereby strengthening the causal identification. First-differencing is then used to derive the estimable equation, which explains student achievement by teacher content knowledge only. The main results suggest that teacher content knowledge in math and Portuguese does not have a statistically significant impact on student achievement. However, further analyses show considerable heterogeneity in the results. This is not unexpected, as Mozambique itself is a rather heterogeneous country with large contrasts. Increasing teacher content knowledge by 1 SD increases student achievement by 0.14 SD among students with Portuguese as their first language, and by 0.13 SD among students in urban schools. Increasing the content knowledge of teachers whose knowledge is above the median also increases the achievement of students whose knowledge is above the median by over 0.12 SD. Based on the results, it is plausible that students’ poor knowledge of Portuguese is a fundamental problem for their learning, and something that should be prioritised. This could be done by improving language education in the earlier grades, or by expanding bilingual education, for instance. Because students whose knowledge is below the median are unaffected by teacher content knowledge, teaching may be targeted at the more advanced students, while those who have already fallen behind benefit very little from it.
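The first-differencing step described in the abstract above, differencing across the two subjects within each student so that student and teacher fixed effects drop out, leaves a regression through the origin of score differences on teacher-knowledge differences. A minimal sketch with made-up numbers, not the SDI survey data:

```python
def first_diff_beta(math_scores, port_scores, teacher_math_k, teacher_port_k):
    """OLS through the origin on within-student across-subject differences:
    (y_math - y_port) = beta * (k_math - k_port) + error."""
    dy = [ym - yp for ym, yp in zip(math_scores, port_scores)]
    dk = [km - kp for km, kp in zip(teacher_math_k, teacher_port_k)]
    return sum(y * k for y, k in zip(dy, dk)) / sum(k * k for k in dk)

# Hypothetical standardised scores and teacher content knowledge (SD units),
# one entry per student; each student has one teacher for both subjects.
math_scores, port_scores = [0.5, -0.2, 0.1], [0.3, -0.4, -0.1]
teacher_math_k, teacher_port_k = [1.0, 0.2, 0.5], [0.4, 0.1, 0.3]
print(round(first_diff_beta(math_scores, port_scores, teacher_math_k, teacher_port_k), 3))
```

Any student-level factor that affects both subjects equally, and any teacher-level factor common to the teacher's math and Portuguese instruction, cancels in the differences, which is what gives the within-student across-subject design its causal leverage.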