
Browsing by discipline "Economics"


  • Feumo, Ludovic Christian (2016)
    I introduce the notion of absorptive capacity into an innovation-driven growth model. The model features firms with heterogeneous size and innovation capability. The economy's aggregate output growth rate is driven by the growth of the productivity index of firms' intermediate goods. There are incumbent firms operating at least one active product line, and potential entrants that do not yet own a product line but engage in research in order to innovate and enter the economy. Each firm engages in R&D activities to improve upon an existing intermediate good or to discover a completely new one. An incumbent firm may exit the economy exogenously and, more importantly, endogenously due to creative destruction or obsolescence. The consideration of the firm's absorptive capacity in Cohen and Levinthal's sense shaped the specifications of the innovation production functions for entrants and incumbents. I set a proxy for the firm's absorptive capacity that considers the quality level of the active product lines owned by the firm, their number and their closeness. An incumbent firm's absorptive capacity then confers on it an innovation efficiency advantage over a potential entrant, both in improving upon its own product lines and in innovating in a product line not yet in its portfolio. For potential entrants, the average quality level of intermediate goods in the economy acts as an extra difficulty to overcome, since they did not participate in building that average quality level and thus lack the absorptive capacity that an average incumbent firm in the economy would possess. Although the general structure of the equilibrium growth rate is the same as in the baseline model (Acemoglu et al. 2013), the content of the incumbents' rate of innovation and of the entry rate of potential entrants is different and likely to deliver different values in equilibrium. This framework may be useful for studying empirically the effects of R&D subsidies on economic growth and may lead to results different from those of the main reference article, which found that subsidies to incumbents' R&D are detrimental to the growth of the economy.
  • Saarinen, Lauri (2014)
    The thesis examines house price formation in the Helsinki metropolitan area. The development of house prices is marked especially by the price appreciation that followed the financial liberalisation of the late 1980s and the subsequent price decline during the early 1990s recession. During the 2000s house prices have increased rapidly, with the exception of the slump during the financial crisis. This thesis focuses on explaining the aforementioned development, with emphasis on the long-run aspect in both the theoretical and the empirical examination. The primary goal is to study the long-run interdependence between house prices and the fundamental determinants mentioned in the theoretical and empirical literature. Based on the results it is possible to draw conclusions on the sustainability of the price level as well as to study the effects of various fundamentals on the metropolitan area price level. The thesis is separated into a theoretical section and an empirical section which makes use of econometric methods in modelling house prices. The long-run relationship between house prices and selected fundamental variables is examined using cointegration analysis. The fundamentals and house prices are modelled in a vector error correction framework central to cointegration analysis. Alongside house prices, household disposable income, mortgage interest rates, metropolitan area total net migration and the stock of housing loans describing household indebtedness are introduced into the system. The quarterly data are compiled from Statistics Finland and Bank of Finland databases for the period 1983–2012. The central result of the thesis is a long-run equilibrium model between house prices and the fundamental determinants. The model is found to work satisfactorily, as the results accord with theory and are statistically significant. In addition, the results are in line with previous empirical studies conducted in Finland. Furthermore, it is found that mortgage interest rates, household indebtedness and migration patterns have been notable factors in determining house prices, especially towards the end of the examination period. The results on short-run dynamics also provide support to the estimated long-run model. A key finding concerning the short-run dynamics is the sluggish adjustment of house prices towards their long-run level. Based on the results of this thesis, house prices in the Helsinki metropolitan area have exceeded the estimated long-run equilibrium price level for a prolonged period. This phenomenon can be explained by demand-side factors, including high net migration to the region as well as low mortgage rates encouraging mortgage lending. On the other hand, inelastic supply and the scarcity of land specific to urban areas restrain the rapid unravelling of excess demand in the housing market. It is thus possible that in the future house prices will adjust downward toward their long-run equilibrium level.
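    As a rough illustration of the vector error correction approach described above, a minimal sketch using statsmodels is given below; the file name, variable names and lag order are hypothetical placeholders rather than the thesis's actual specification.

```python
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

# Hypothetical quarterly data with the kinds of series named in the abstract:
# house prices, disposable income, mortgage rate, net migration, stock of housing loans.
data = pd.read_csv("helsinki_housing.csv", index_col="quarter", parse_dates=True)
endog = data[["house_price", "income", "mortgage_rate", "migration", "loan_stock"]]

# Choose the cointegration rank with a Johansen trace test (lag order illustrative).
rank_test = select_coint_rank(endog, det_order=0, k_ar_diff=2, method="trace")

# Estimate the VECM: the beta coefficients give the long-run equilibrium relation,
# the alpha loadings give the speed of adjustment toward it (e.g. sluggish house prices).
model = VECM(endog, k_ar_diff=2, coint_rank=rank_test.rank, deterministic="ci")
res = model.fit()
print(res.beta)   # long-run cointegrating vector(s)
print(res.alpha)  # adjustment coefficients
```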
  • Beniard, Henry (2010)
    This thesis examines empirically whether variables that can reliably predict Finnish economic activity can be found. The aim is to find and combine several variables with predictive ability into a composite leading indicator of the Finnish economy. The target variable it attempts to predict, and thus the measure of the business cycle used, is Finnish industrial production growth. Different economic theories suggest several potential predictor variables in categories such as consumption data, data on orders in industry, survey data, interest rates and stock price indices. Reviewing a large amount of empirical literature on economic forecasting, it is found that particularly interest rate spreads, such as the term spread on government bonds, have been useful predictors of future economic growth. However, the literature surveyed suggests that the variables found to be good predictors seem to differ depending on the economy being forecast, the model used and the forecast horizon. Based on the literature reviewed, a pool of over a hundred candidate variables is gathered. A procedure, involving both in-sample and pseudo out-of-sample forecast methods, is then developed to find the variables with the best predictive ability from this set. This procedure yields a composite leading indicator of the Finnish economy comprising seven component series. These series are very much in line with the types of variables found useful in previous empirical research. When the developed composite leading indicator is used to forecast in a sample from 2007 to 2009, a time span including the latest recession, its forecasting ability is far poorer. The same occurs when forecasting with a real-time data set. It would seem, however, that individual very large forecast errors are the main reason for the poor performance of the composite leading indicator in these forecast exercises. The findings of this thesis suggest several developments to the methods adopted in order to produce more accurate forecasts. Other intriguing topics for further research are also explored.
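    A minimal sketch of a pseudo out-of-sample screening exercise of the kind mentioned above is shown below; the data file, horizon, lag structure and candidate variable are illustrative assumptions, not the thesis's actual procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def pseudo_oos_rmse(y, x, start, h=6):
    """At each t, fit y_{s+h} = a + b*x_s on observations known at t, then forecast y_{t+h} from x_t."""
    target = y.shift(-h)                       # target[s] = y_{s+h}
    errors = []
    for t in range(start, len(y) - h):
        X_train = sm.add_constant(x.iloc[: t - h + 1])
        y_train = target.iloc[: t - h + 1]
        fit = sm.OLS(y_train, X_train).fit()
        x_t = sm.add_constant(x.iloc[t : t + 1], has_constant="add")
        forecast = float(np.asarray(fit.predict(x_t))[0])
        errors.append(target.iloc[t] - forecast)
    return float(np.sqrt(np.mean(np.square(errors))))

# Hypothetical monthly data: industrial production growth and one candidate predictor.
df = pd.read_csv("candidates.csv", index_col="month", parse_dates=True)
print(pseudo_oos_rmse(df["ip_growth"], df["term_spread"], start=120))
# Comparing this RMSE across candidate series is one way to pick indicator components.
```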
  • Helander, Aleksi (2011)
    An extensive electricity transmission network facilitates electricity trading between Finland, Sweden, Norway and Denmark. Currently most of the area's power generation is traded at NordPool, where trading volumes have steadily increased since the early 1990s, when the exchange was founded. The Nordic electricity market is expected to follow the current trend and integrate further with the other European electricity markets. Hydro power is the source of roughly half of the supply in the Nordic electricity market, and most of the hydro power is generated in Norway. The dominant role of hydro power distinguishes the Nordic electricity market from most other market places. Production of hydro power varies mainly with hydro reservoirs and the demand for electricity. Hydro reservoirs are affected by water inflows that differ from year to year. The hydro reservoirs explain much of the behaviour of the Nordic electricity markets. Therefore, among others, Kauppi and Liski (2008) have developed a model that analyses the behaviour of the markets using hydro reservoirs as explanatory factors. Their model includes, for example, the welfare loss due to socially suboptimal hydro reservoir usage, the socially optimal electricity price, hydro reservoir storage and thermal reservoir storage, which are referred to as outcomes. However, the model does not describe the actual market outcome but rather an ideal situation. In the model the market is controlled by one agent, i.e. one agent controls all the power generation reserves; this is referred to as the socially optimal strategy. The article by Kauppi and Liski (2008) also includes a case in which an individual agent has a certain fraction of market power, e.g. 20% or 30%. In order to maintain the focus of this thesis, that part of their paper is omitted. The goal of this thesis is two-fold. Firstly, we extend the results of the socially optimal strategy to the years 2006–08, as the earlier study ends in 2005. The second objective is to improve on the methods of the previous study. This thesis produces several outcomes (spot price, welfare loss, etc.) under socially optimal actions. The welfare loss is interesting as it describes the inefficiency of the market. The spot price is an important output for market participants, as it often affects end users' electricity bills. Another task is to modify and try to improve the model by using more accurate input data, e.g. by considering the effect of emission trading rights on the input data. After the modifications to the model, new welfare losses are calculated and compared with the corresponding results before the modifications. The hydro reservoir has the highest explanatory significance in the model, followed by thermal power. In the Nordic markets, thermal power reserves consist mostly of nuclear power and other thermal sources (coal, natural gas, oil, peat). It can be argued that hydro and thermal reservoirs determine electricity supply. Roughly speaking, the model takes into account electricity demand and supply and several parameters related to them (water inflow, oil price, etc.), finally yielding the socially optimal outcomes. The author of this thesis is not aware of any similar model having been tested before. There have been some other studies close to the Kauppi and Liski (2008) model, but those have a somewhat different focus. For example, a specific feature of the model is its focus on long-run capacity usage, which differs from previous studies on short-run market power. The study closest to the model concerns California's wholesale electricity markets but uses a different methodology. The thesis is structured as follows.
  • Maijanen, Ville (2013)
    The Master's thesis is a direct continuation of the Bachelor's thesis, whose results showed that the metropolitan area has in total about 63,000 persons licensed to possess firearms and about 170,000 firearms. Based on these figures, it could be concluded that the number and accessibility of shooting ranges are in no way sufficient relative to the number of licensed persons and firearms in the area. The Master's thesis aims to produce new information on the accessibility of the shooting range centres of the metropolitan area, their competitive catchment areas and their relative locations, so that location, investment and development decisions concerning shooting range centres can be made on a sufficiently informed basis. Information on the location and accessibility of the potential user base provides, for its part, grounds for assessing the demand for shooting range centres. The study examines the theoretical background of accessibility and the geographical accessibility of a service location from several perspectives, and develops new technical assessment methods suited to the purpose. The empirical part examines accessibility zones of the metropolitan area shooting range centres based on different maximum travel times, as well as the corresponding cumulative accessibility of the population and the potential user base, and compares these with each other. In addition, the competitive accessibility and relative locations of the shooting range centres, based on the external usability constraints of each centre and developed for this study, are compared with each other. The usability coefficient of a shooting range centre, i.e. its simultaneous user capacity, annual hours of use and the diversity of its range selection, is taken into account in the model as a decision criterion for customers' choice of service location, in addition to travel costs. The usability coefficient aims to describe the benefit a shooting range centre provides to its users. The study uses data from the national firearms licence register on the numbers, types and locations of licence holders and firearms by municipality in the metropolitan area. The map presentations were produced using the Municipal Division (Kuntajako) data set of the National Land Survey of Finland and the Grid Database 2012 (Ruututietokanta) population data of Statistics Finland. In addition, a separate accessibility data set was constructed by calculating actual travel times and road distances from every map grid cell to every shooting range centre. For public reporting and sharing of the results, independent of time and place, an internet publishing platform was built for the study at www.saunalahti.fi/villemai/. The five shooting range centres examined form the metropolitan area entity of the study. The network of shooting range centres consists of the locations of the centres and their potential user base, the travel costs between them, and the usability offered by the different centres. This entity can now, for the first time, be examined on the basis of results calculated with mathematical accessibility models. The study has produced important new and previously unpublished information on the metropolitan area shooting range network from several perspectives. Based on the latest accessibility theory, the study addresses all four components of accessibility. The land use component is taken into account with respect to the number, quality and location of both potential customers and shooting range centres. In addition, the matching of demand and supply under capacity constraints is taken into account. The travel costs of the transport component are taken into account in the form of road distances and driving times. The temporal component includes the constraints that the opening hours of the shooting range centres impose on accessibility. The individual component is examined as time-use constraints set by office hours. The results show indisputably that the large indoor hall planned for the Kivikko sports park in Helsinki is the strongest shooting range centre among the existing and planned centres compared here in the metropolitan area. There are many reasons for this, the most important being the centre's best location relative to its potential users and the road network, and its best efficiency, i.e. usability coefficient, based on the possibility of extensive year-round opening hours. Expansions of the shooting range network and further development of existing sites are necessary in the metropolitan area. New investments and development measures for existing centres should be prioritised on the basis of the efficiency they produce and the corresponding increase in user benefits. The inevitably limited funding should be allocated on these grounds, taking into account balanced geographical accessibility across the whole metropolitan area.
  • Asikainen, Juha (2018)
    The thesis deals with the application of principal component analysis (PCA) in momentum-based investment strategies. Principal component analysis is a dimension reduction method for multidimensional datasets that seeks new, mutually uncorrelated variables called principal components such that the explanatory power over the original dataset is maximised. Momentum strategies are long-short, zero-investment portfolios that, within an asset class, buy instruments that have performed relatively well and sell short instruments that have performed weakly. Performance is measured by directly applying total returns or some derivative of them, and the evaluation horizons typically range from a few months to a year. Earlier studies have assessed total returns, e.g. in relation to fundamental factor models. The main data source of the study is total return data for US equities spanning roughly 30 years of recent history. The strategies are defined using monthly data. The principal component models are estimated using daily return series. These models are then utilised in two types of momentum strategies. The first group uses residuals from PCA, i.e. the portion of return not explained by the principal components. The second type of strategy allocates money based on the returns of the principal components. The strategies result in time series of the returns that would have been achieved by investing according to the rule-based strategies. These are then analysed using statistical and econometric methods. Furthermore, the returns are compared to results obtained in prior studies that have utilised similar methods. This includes studying the effect of autocorrelations in total returns, residuals and principal components on the success of the respective strategies. The results indicate that both sets of strategies generate absolute returns in line with those obtained using the raw total return signal. The volatility of these returns, as measured by the standard deviation, is significantly lower than that of strategies based on total returns. Earlier studies using residuals defined with different types of models have shown similar results. The results of the residual-based strategies are not explained by the autocorrelations of the underlying instruments; in fact these autocorrelations seem to detract from the returns. The principal components, on the other hand, seem to have positive autocorrelations which in large part explain the success of the related strategies. The key finding of the thesis is the decomposition of momentum profits into distinct sources by applying the relatively simple and well-known method of principal component analysis. These results complement earlier research on the split of momentum profits between systematic and non-systematic sources of variation in finding that both are significant. The residual-based strategies have a higher economic significance due to their low correlations with conventional strategies. The analysis of autocorrelations points to differing econometric drivers between the two sets of strategies.
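    A minimal sketch of the residual-momentum idea described above is given below, using scikit-learn PCA on daily returns; the number of components, lookback window and decile ranking rule are illustrative assumptions rather than the thesis's specification.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

def residual_momentum_weights(daily_returns: pd.DataFrame, n_components=5, lookback=250):
    """Rank stocks by the cumulative part of their return NOT explained by the leading
    principal components, then go long the top decile and short the bottom decile."""
    window = daily_returns.iloc[-lookback:]
    demeaned = window - window.mean()

    pca = PCA(n_components=n_components)
    factors = pca.fit_transform(demeaned.values)    # daily principal component returns
    explained = factors @ pca.components_           # part of returns spanned by the PCs
    residuals = demeaned.values - explained         # idiosyncratic (residual) returns

    signal = pd.Series(residuals.sum(axis=0), index=window.columns)
    n = max(1, len(signal) // 10)
    top, bottom = signal.nlargest(n).index, signal.nsmallest(n).index

    weights = pd.Series(0.0, index=window.columns)
    weights[top] = 1.0 / n       # long recent residual winners
    weights[bottom] = -1.0 / n   # short recent residual losers
    return weights
```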
  • Wichmann, Ira Anna Katariina (2011)
    Modern-day economics is increasingly biased towards believing that institutions matter for growth, an argument that has been further reinforced by the recent economic crisis. There is also a wide consensus on what these growth-promoting institutions should look like, and countries are periodically ranked depending on how their institutional structure compares with the best-practice institutions, mostly in place in the developed world. In this paper, it is argued that 'non-desirable' or 'second-best' institutions can be beneficial for fostering investment and thus provide a starting point for sustained growth, and that what matters is the appropriateness of institutions to the economy's distance to the frontier or current phase of development. Anecdotal evidence from Japan and South Korea is used as a motivation for studying the subject, and a model is presented to describe this phenomenon. In the model, the rigidity or non-rigidity of the institutions is described by entrepreneurial selection. It is assumed that entrepreneurs are the ones taking part in the imitation and innovation of technologies, and that decisions on whether or not their projects are refinanced are made by capitalists. The capitalists in turn have no entrepreneurial skills and act merely as financiers of projects. The model has two periods and two kinds of entrepreneurs: those with high skills and those with low skills. The society's choice between an imitation-based and an innovation-based strategy is modelled as the trade-off between refinancing a low-skill entrepreneur and investing in the selection of entrepreneurs, which results in a larger fraction of high-skill entrepreneurs with the ability to innovate but less total investment. Finally, a real-world example from India is presented as an initial attempt to test the theory. The data from the example are not included in this paper. It is noted that the model may lack explanatory power due to difficulties in testing its predictions, but that this should not be seen as a reason to disregard the theory – the solution might lie in developing better tools, not just better theories. The conclusion presented is that institutions do matter. There is no one-size-fits-all solution when it comes to institutional arrangements in different countries, and developing countries should be given space to develop their own institutional structures that cater to their specific needs.
  • Rasijeff, Moona (2017)
    Traditional economic theory argues that competitive markets are unable to enhance innovation incentives. This is based on the claim that it is not possible for an inventor to earn a profit from his invention in the face of unlimited imitation. Traditional theory therefore calls for intellectual property rights for innovators, such as patents, to guarantee effective innovation production. The economic safety provided by patents creates innovation incentives, enhancing research levels and product quality. However, the increasing popularity of patents combined with the current, extensive patenting systems can create economic inefficiencies. The monopolistic competition arising from patent-provided rights may weaken innovation incentives in small as well as quickly developing industries. In addition, the resulting high prices and the legal barriers to entry granted by intellectual property can distort competition and may even suppress the patent holder's innovation tactics. Traditional economic theory also fails to explain why firms may choose not to utilise formal intellectual property in favour of informal protection methods of high importance to firms, such as secrecy, high wages, and increasing production complexity. My thesis examines whether competitive markets are able to enhance innovation incentives, and if so, under what conditions. I also aim to explain why firms may favour other protective measures for their inventions over patents. Henry and Ponce (2011) and Henry and Ruiz-Aliseda (2015) expand our understanding of these topics using game theory. A key factor in the analysis is the assumption that free spillovers are non-existent; instead, knowledge transfer is endogenous and must be purchased. As a result, potential imitators prefer to wait for the cost of knowledge to decrease and delay their market entry. Such a delay, and the possibility to participate in knowledge trading, secure positive rents for the inventor, which compensate for his innovation costs. These results are achieved even in unfavourable circumstances, where competition is high or the inventor incurs additional costs to protect the invention by means other than patenting. In fact, in such circumstances, the inventor's profits are expected to approach monopoly profits. In conclusion, competitive markets attain efficiency and improved levels of social welfare, and therefore innovation levels can persist in the market. These results are nonetheless sensitive to the elasticity of demand. Market imperfections such as the indivisibility of an idea, moral hazard and adverse selection pose additional problems for modelling within a competitive market. My thesis weighs in on the ever-crucial patenting debate: is the modern, extensive patent system obsolete? High prices and increased monopolistic competition can have severe consequences for social welfare and limit the exchange of knowledge. New models of innovation in competitive markets and their ability to encourage innovation incentives pose an important argument against the expansion of the current patent system. These models also provide an explanation for the popularity of informal protection methods for innovations, emphasising the value of firm strategising over legal procedures. Such patent criticism and empirical evidence could be utilised in the development of the current patent system.
  • Acharya, Abha (2014)
    This thesis examines the potential fungibility of foreign assistance to the Government of Nepal using two methods: an econometric model and a modified ORANI-G, a Computable General Equilibrium (CGE) model. I use the econometric model to corroborate the findings of the CGE model and to determine whether such a model can produce credible empirical evidence on aid fungibility. Both models indicate the presence of general and categorical fungibility, and of non-additionality, in the use of aid in Nepal. I begin with a partial equilibrium econometric model to estimate government expenditures using a Seemingly Unrelated Regression (SUR). At the sectoral level, categorical aid is prone to reshuffling, generating overall negative development investments in most of the sectors. At the aggregate level, a unit of aid produces a meagre 0.33 units of additional development expenditure in the Nepalese government budget. In addition, aid partially finances non-development expenditures, but only slightly enhances the government's own revenue effort. Next, I utilize the ORANI-G model with a Klein-Rubin functional form for government behaviour, rigidities in the labour market, and some additional parameters to study aid fungibility in Nepal. This produces results that are analogous to those of the econometric model. Foreign assistance to Nepal exhibits a high level of general and categorical fungibility, with an insignificant increase in revenue collection. Overall, a unit of aid stimulates only 0.45 units of additional development expenditure in the Nepalese government budget. In using the CGE model to study fungibility, this thesis develops a new method of analyzing the research question, whereas previous studies use models in a partial equilibrium setting, failing to account for the decision-making processes of the government. This thesis is an attempt to expand the existing literature by introducing CGE models into the study of aid fungibility and to motivate further study of fungibility using CGE modelling.
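    A minimal sketch of a seemingly unrelated regression of expenditure categories on aid, of the kind used in the first step above, is shown below with the linearmodels package; the equation structure, column names and data file are hypothetical.

```python
import pandas as pd
from linearmodels.system import SUR

# Hypothetical data: development and non-development expenditures, sectoral aid, revenue, GDP.
df = pd.read_csv("nepal_budget.csv")
df["const"] = 1.0

# One equation per expenditure category, estimated jointly so that the error terms
# may be correlated across equations (the defining feature of SUR).
equations = {
    "dev_exp":    {"dependent": df["dev_exp"],    "exog": df[["const", "aid_dev", "revenue", "gdp"]]},
    "nondev_exp": {"dependent": df["nondev_exp"], "exog": df[["const", "aid_nondev", "revenue", "gdp"]]},
}
res = SUR(equations).fit(cov_type="unadjusted")
print(res.summary)  # a coefficient on aid well below one is consistent with fungibility
```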
  • Soininvaara, Ohto (2018)
    Social assistance is intended as last-resort social security for situations in which other sources of income do not guarantee a person's minimum subsistence. It has become an established part of the Finnish social security system as primary benefits have weakened. Housing costs exceeding housing allowances are one factor explaining the need for social assistance, especially in Helsinki, where housing costs are high. This thesis studies the housing choices and housing costs of upper secondary and higher education students in Helsinki and their connection to receiving social assistance. The aim is to determine to what extent housing costs explain social assistance receipt, and which factors expose some low-income households to higher housing costs than others. In addition, students' use of social assistance is examined in general. The study covers the years 2008–2010. The use of social assistance grew considerably in the whole of Finland during the study period, and this is also reflected in the increased use of social assistance by students. The research data have been combined from the social assistance payment register of the City of Helsinki social services, the student financial aid payment register, the population information register and the tax authority's taxation data. For residents of Helsinki the data set is a total population data set, i.e. it covers all persons and payments in the registers. The study focuses on recipients of the housing supplement of student financial aid, i.e. in practice on students living in rented housing. Key figures on students' income and housing are first examined at a general level. Regression models are then used to study the connections between students' background characteristics, receipt of social assistance and rent. The data and the methods used do not allow causal interpretations, so the regression analysis is descriptive in nature. Students who received social assistance are found to have paid, on average, higher rent than other students. The connection is visible both in absolute terms and when controlling for the background characteristics of the members of the study population. Of the available variables, the rent and income levels are the most significant predictors of social assistance receipt, which reflects the criteria for granting the benefit. Social assistance recipients have thus been, in particular, low-income students with high housing costs. Regarding variation in income, being left without summer work in particular appears to push students into receiving social assistance. The summer months differ from the rest of the year, as student financial aid is mostly not granted then, and the summer months indeed stand out clearly in the prevalence of students' use of social assistance. Of those who received social assistance during the spring or autumn terms, more than half were students at vocational institutions, whereas of those who received the benefit only during the summer months, the majority studied at universities of applied sciences or universities. In addition to individual preferences, students' housing costs may vary for several reasons. Based on the economics literature, the rent level may be temporarily above the optimal level due to, among other things, the costs of moving or a weaker position in the rental market. Because of their below-market rents, student housing appears to have reduced the need for social assistance at all levels of education, but its availability is limited. Living alone emerges as a third possible explanation for why social assistance recipients have, on average, higher housing costs than others. A further possible explanation is that social assistance partly leaks to landlords. Unlike the low housing supplement of student financial aid, social assistance does not necessarily provide an incentive to look for the cheapest dwelling either. However, no conclusions about the incidence effects of social assistance can be drawn on the basis of this study.
  • Honka, Joona (2017)
    The study examines the formation of housing prices in the Helsinki metropolitan area using housing transaction data from 2016. Prices are first examined at the level of the whole metropolitan area, after which the formation of prices is compared between the cities. In addition to the centre of Helsinki, the metropolitan area has several smaller urban centres, so the dwellings are divided into areas according to the nearest urban centre. The distance variables to the nearest urban centre and to the centre of Helsinki were created using the average of public transport and private car travel times. Distance to the nearest urban centre gives contradictory results at the level of the whole metropolitan area, so I examined how the different urban centres and the distance to them affect housing prices. The results show that the quality of an urban centre can be assessed from the coefficient of the distance variable: near better urban centres dwellings are more expensive and prices fall as the distance to the centre increases, while in weaker urban centres the situation is the opposite. The reason for this is the net value of the positive and negative effects of the urban centres. I also found a significant result concerning the socioeconomic variables. According to theory, people move to residential areas inhabited by people like themselves. From the socioeconomic variables, i.e. income, unemployment rate and education level, I constructed an indicator describing the socioeconomic ranking of an urban centre. Based on the results, Tapiola and the centre of Helsinki rank highest, while Koivukylä and Hakunila rank last. The coefficient of the distance variable to the nearest urban centre correlates very strongly with the socioeconomic ranking. A few exceptions were explained by new dwellings built outside the centres, in which case the effect of the construction year is stronger than that of socioeconomic status. Based on theory and the results, the importance of distance to the centre of Helsinki is very large: dwellings in the immediate vicinity of the city centre are about 60 per cent more expensive than those more than 45 minutes away. The results suggest that the negative externalities associated with the centre of Helsinki in the 1990s have diminished, or that preferences for living in the city centre have changed, because the valuation of city-centre dwellings has grown relatively the most in the metropolitan area since the 1990s. The effect of the construction year on the price of a dwelling is also very significant. The relationship between price and construction year is not linear but resembles a U-shape, because the oldest and newest dwellings are the most expensive. Based on the results, prices are lowest for dwellings built in the 1960s and 1970s. According to a study conducted in 1997, the lowest prices in comparable data were found for dwellings from the 1940s, even though dwellings from the 1960s and 1970s were already then considered of poor quality and architecturally insignificant. According to theory, the low prices of 1940s dwellings were explained by the poor quality of building materials, people's preferences and architecture. The valuation of dwellings of that era has grown, but based on the results the main explanatory factor lies in major renovations and pipe replacements. Pipe replacements are carried out on average every 50 years, so at the time of the study conducted in the 1990s the renovations concerned precisely the dwellings built in the 1940s, whereas at present they concern the dwellings built in the 1960s and 1970s.
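    A minimal sketch of the kind of hedonic price regression behind results like these is given below; the data file, variable names and functional form are illustrative assumptions, not the thesis's actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical 2016 transaction data: price per square metre, dwelling attributes,
# and travel times to the Helsinki CBD and to the nearest sub-centre.
sales = pd.read_csv("hma_sales_2016.csv")

# Log-linear hedonic model: the coefficient on travel time to the CBD gives the
# approximate percentage price gradient with respect to distance, and decade
# dummies for the construction year allow a U-shaped age profile.
model = smf.ols(
    "np.log(price_per_m2) ~ time_to_cbd + time_to_subcentre"
    " + floor_area + rooms + C(construction_decade) + C(nearest_subcentre)",
    data=sales,
)
print(model.fit().summary())
```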
  • Ahola, Samu-Petteri (2020)
    Banks must provision against credit losses by holding sufficient equity buffers on their balance sheets. The Basel regulations set the framework within which minimum capital requirements should be determined. Earlier studies have shown that the Basel II foundation IRB model underestimates banks' minimum capital requirements under Pillar 1 of the Basel regulations when the loan portfolio is poorly diversified. Including name concentration risk in the IRB model could make Pillar 1 minimum capital requirements more accurate when the bank's credit portfolio is not perfectly granular. The aim of this thesis is to examine whether an add-on accounting for name concentration risk, incorporated into the foundation IRB model, can improve the accuracy of the minimum capital requirement for a credit portfolio consisting of loans to large Nordic corporations. The research data consist of syndicated loan data downloaded from Dealogic, which in this thesis simulates the credit risk that large corporations pose to a Nordic bank. From the compiled data, 41 sub-portfolios were created by random sampling. For each sub-portfolio and counterparty, the minimum capital was determined according to the foundation IRB model, according to a modified foundation IRB model accounting for name concentration risk, and according to Monte Carlo simulations, so that the minimum capital requirements given by the different models could be compared. The modified foundation IRB model accounting for name concentration risk computed the minimum capital requirement more accurately than the foundation IRB model of the Basel regulations. Including name concentration risk in Pillar 1 minimum capital calculations would speed up and simplify the banks' task of determining their own minimum capital requirement. A further advantage for banks would be that the additional capital required by an individual loan could be determined more accurately.
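    For reference, a sketch of the standard Basel foundation IRB risk-weight function for corporate exposures (the asymptotic single risk factor formula that a name-concentration add-on would adjust) is given below; the PD, LGD and maturity inputs are illustrative, not values from the thesis.

```python
from math import exp, log, sqrt
from scipy.stats import norm

def firb_capital_requirement(pd_, lgd=0.45, m=2.5):
    """Basel foundation IRB capital requirement K (as a share of EAD) for a corporate exposure."""
    # Asset correlation falls from 0.24 to 0.12 as PD increases.
    w = (1 - exp(-50 * pd_)) / (1 - exp(-50))
    r = 0.12 * w + 0.24 * (1 - w)
    # Maturity adjustment.
    b = (0.11852 - 0.05478 * log(pd_)) ** 2
    # Conditional expected loss at the 99.9th percentile of the systematic factor, minus expected loss.
    k = lgd * (norm.cdf((norm.ppf(pd_) + sqrt(r) * norm.ppf(0.999)) / sqrt(1 - r)) - pd_)
    return k * (1 + (m - 2.5) * b) / (1 - 1.5 * b)

# Illustrative counterparty: PD of 1 %, foundation IRB senior unsecured LGD of 45 %.
print(firb_capital_requirement(0.01))  # multiply by 12.5 * EAD to obtain risk-weighted assets
```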
  • Tuhkuri, Joonas (2015)
    There are over 100 billion searches on Google every month. This thesis examines whether Google search queries can be used to predict the present and the near-future unemployment rate in the US. Predicting the present and the near future is of interest, as the official records of the state of the economy are published with a delay. To assess the information contained in Google search queries, the thesis compares a simple predictive model of unemployment to a model that contains a variable, the Google Index, constructed from Google data. In addition, descriptive cross-correlation analysis and Granger non-causality tests are performed. To study the robustness of the results, the thesis considers state-level variation in the unemployment rate and the Google Index using a fixed effects model. Furthermore, the sensitivity of the results is studied with regard to different search terms. The results suggest that Google searches contain useful information on the present and the near-future unemployment rate. The value of Google data for forecasting purposes, however, tends to be time-specific, and the predictive power of Google searches appears to be limited to short-term predictions. The results demonstrate that big data can be utilized to forecast economic indicators.
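    A minimal sketch of the comparison described above, between a simple autoregressive benchmark and a model augmented with a Google-based index, is shown below; the data file, variable names and lag structure are illustrative assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical monthly data: US unemployment rate and a Google Index built from
# the volume of unemployment-related search queries.
df = pd.read_csv("unemployment_google.csv", index_col="month", parse_dates=True)
df["u_lag1"] = df["unemployment"].shift(1)       # latest published figure (with delay)
df["gi_current"] = df["google_index"]            # searches are available without delay

baseline = smf.ols("unemployment ~ u_lag1", data=df).fit()
augmented = smf.ols("unemployment ~ u_lag1 + gi_current", data=df).fit()

# If the Google Index carries information about the present, the augmented model
# should fit better and show a significant coefficient on gi_current.
print(baseline.rsquared_adj, augmented.rsquared_adj)
```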
  • Vu, Wendy (2019)
    Bitcoin and other cryptocurrencies have frequently been in the media lately. As these cryptocurrencies are relatively new, there is not much economic theory explaining their behavior and price developments. For these reasons, the goal of this thesis is to find an economic theory with which to study the demand for Bitcoin. In this thesis, I study Bitcoin by applying Walsh's money-in-utility (MIU) framework, modifying Walsh's original model by incorporating Bitcoin into it. In this model, Bitcoin is used as a payment method and as a store of value. Both Bitcoin and money can be used to buy any goods, but there are certain goods that are easier to buy using bitcoin. Hence, Bitcoin has a transaction benefit and households will always hold some bitcoin in their portfolio. Using Walsh's MIU function, I derive a demand function for Bitcoin. In addition, I go through the working paper “Bitcoin Pricing, Adoption, and Usage: Theory and Evidence” by Athey et al. (August 2016). In this paper, Bitcoin is used both as a payment method and as a store of value. According to the findings of Athey et al., Bitcoin seems to be used mainly as a store of value. I present an overview of the paper, including its results, and then concentrate on its aggregate analysis of the Bitcoin exchange rate. Based on the Bitcoin exchange rate equation presented by Athey et al., I study whether the Bitcoin demand function derived from the MIU model is able to explain the changes in Bitcoin's aggregate demand in the real market. As expected, due to the assumptions and restrictions of the model, the Bitcoin demand function derived in this thesis is not able to fully explain the changes in the demand for Bitcoin in the real world. Nonetheless, subject to these assumptions and restrictions, the Bitcoin demand function can be used to study the relationship between bitcoin demand, the domestic nominal interest rate and consumption. Finally, I present an alternative approach to further study Bitcoin's demand.
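    As a rough illustration of the money-in-utility setup described above, the household problem might be sketched as follows; the notation and functional form are illustrative assumptions, not the thesis's exact model.

```latex
\max_{\{c_t,\, m_t,\, b_t\}} \;
\mathbb{E}_0 \sum_{t=0}^{\infty} \beta^{t}\, u(c_t, m_t, b_t),
\qquad
\frac{u_m(c_t, m_t, b_t)}{u_c(c_t, m_t, b_t)} \;=\; \frac{i_t}{1+i_t},
```

    where c_t is consumption, m_t real money balances and b_t real bitcoin holdings, both of which enter utility because they provide transaction services, and i_t is the domestic nominal interest rate. In this kind of setup, the analogous first-order condition for b_t, with bitcoin's own expected return entering its opportunity cost, is what delivers a bitcoin demand function linking bitcoin holdings, the nominal interest rate and consumption.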
  • Wilkman, Maria (2015)
    The aim of this empirical study is to analyse whether announcements by Moody’s, Standard and Poor’s and Fitch Ratings regarding the credit rating of Ukraine and Russia can explain the movements in the yield spreads on their government bonds during 1st January 2010 – 6th February 2015. The motivation for this research question derives from the results of previous empirical studies, which have found that announcements by the three credit rating agencies regarding the sovereign rating of a country impact the country’s borrowing costs. Particularly negative rating news, concerning either a downgrade or the assignment of a negative outlook to the rating, has been found to impact yield spreads, leading to increased borrowing costs for the country. Against this background, this study analyses whether the many negative announcements from the credit rating agencies regarding the Ukrainian and Russian sovereign ratings can explain the large increases in the countries’ government bond yield spreads since 2010. The methodology used in the empirical study is based on regression analysis, which incorporates an event study through the use of dummy variables. The overall findings indicate that announcements from the rating agencies affect the government borrowing costs of the country concerned, as the results show a statistically significant impact of the announcement events on the country’s bond yield spreads. However, as the impact of the events on the yield spreads is considerably smaller in magnitude than the movements in the spreads, the results indicate that factors other than the rating agency announcements are driving the large increases in the Ukrainian and Russian borrowing costs. The conclusion of the study is therefore that although some of the announcements are found to be statistically significant, the rating events alone cannot explain the movements in the yield spreads on the countries’ government bonds during 1st January 2010 – 6th February 2015. Contrary to previous studies, the results show no clear evidence that negative events affect yield spreads to a greater extent than positive events. There is also no considerable difference in the impact on the yield spreads between announcements by the different agencies. In terms of the magnitude of the impact of rating events on yield spreads, the results of this study are largely in line with previous findings in the literature. The analysis of the relationship between credit rating announcements and government bond yield spreads for Ukraine and Russia since 2010 presented in this paper is divided into five chapters, which approach the research question from different perspectives. Chapter one provides the necessary background for the analysis and offers a theoretical explanation for why announcements by the rating agencies may impact the yield spreads on a country’s government bonds. Chapter two presents an overview of the empirical literature on the topic; three previous papers which use event study analysis to investigate the impact of rating agency announcements on government yield spreads are discussed and evaluated. Against this background, chapter three describes the empirical methodology used in this paper to study the relationship between Ukrainian and Russian credit rating announcements and yield spreads; the data set on which the analysis is based is introduced in chapter four. The results of the analysis are discussed in chapter five, followed by concluding remarks.
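    A minimal sketch of an event-study regression with announcement dummies, of the general kind described above, is shown below; the window length, control variable, covariance estimator and data files are illustrative assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical daily data: change in the sovereign yield spread for one country, and
# the dates of rating announcements (downgrades, upgrades, outlook changes).
df = pd.read_csv("spreads.csv", index_col="date", parse_dates=True)
events = pd.read_csv("rating_events.csv", parse_dates=["date"])
df["lagged_spread_change"] = df["spread_change"].shift(1)

# Dummy equal to one in a short window around each announcement, split by the sign of the news.
for sign in ["negative", "positive"]:
    dates = events.loc[events["sign"] == sign, "date"]
    in_window = pd.Series(False, index=df.index)
    for d in dates:
        in_window |= (df.index >= d) & (df.index <= d + pd.Timedelta(days=2))
    df[f"{sign}_event"] = in_window.astype(float)

res = smf.ols("spread_change ~ negative_event + positive_event + lagged_spread_change",
              data=df).fit(cov_type="HAC", cov_kwds={"maxlags": 5})
print(res.summary())  # significant event dummies indicate an announcement effect on spreads
```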
  • Tschamurov, Viveka (2013)
    A model is built in which two countries compete for a multinational enterprise's (MNE's) foreign direct investment (FDI), provided that its arrival will increase the host country's social welfare. Both potential host countries have unionised labour markets in which monopoly labour unions determine whether wage setting is decentralised, intermediate or centralised. The governments may influence the unions' decision by setting a lump-sum tax on them. Both countries have two sectors, a non-sheltered and a sheltered sector. The MNE will enter the non-sheltered sector and is assumed to be more productive than the incumbent firms there. Product market competition between the MNE and the domestic incumbent firms is ruled out in order to isolate the effect of pure wage compression from the effect of product market competition. The game evolves in five stages: (1) the governments set taxes, (2) the monopoly unions choose the level of wage setting, (3) the MNE chooses its investment location, (4) the monopoly unions set wages, and (5) the firms set output. The purpose of the model is to learn whether the degree of centralisation of wage setting can be used as a strategic choice to attract foreign direct investment. The main results are the following. The MNE's (incumbent unions') most preferred choice is always centralised (decentralised) wage setting. The governments' most preferred choice is either decentralised or centralised wage setting, depending on the relative sizes of the two sectors. If social welfare in country 1 is highest under decentralised wage setting, then the optimal policy of government 1 is to set zero taxes. If social welfare in country 2 is highest under centralised wage setting, then the optimal policy of government 2 is to set positive taxes slightly above the level required to make the domestic incumbent labour unions prefer centralised wage setting. Given this, the MNE will always invest in country 2. The exact expressions for the stage-contingent lump-sum taxes are derived. To the best of my knowledge, this is a novel contribution that cannot be found elsewhere.
  • Nissinen, Sari (2020)
    Greenhouse gases can be considered negative externalities that harm the climate. Externalities can be internalized by setting a tax equal to the marginal damage from the externality, a lesson taught by Pigou already a hundred years ago. Carbon pricing has long been seen as the most prominent tool to fight climate change. In recent decades there have been efforts to agree on global carbon emission reductions. Although international agreements have been implemented, most of them have lacked either effective quantity targets or the adoption of a global carbon price. The reasons for the failure of global agreements are ambiguous, but a tendency to rely on countries' altruism in agreeing on costly emission reductions has hardly helped. Numerous scientists have suggested that adopting a single global carbon price would be both relatively easy to agree on and sufficient as a tool to restrict global carbon emissions. By adopting a common price for carbon, national authorities would be left with implementing the emission-reducing policies. As some countries and unions have adopted carbon pricing schemes, knowledge of the effectiveness of these instruments is beginning to emerge. Carbon taxes and cap-and-trade systems are common mechanisms for implementing carbon pricing, and in an optimal setting the outcomes of the two instruments are identical. Nevertheless, in practice the outcomes depend on the regulating authority's ability to estimate an effective level for the tax or the emission cap. It has been suggested that it is easier to reach an international agreement on a price measure than on a quantity measure. Although a single global carbon price would likely represent a compromise, a shadow value of carbon can be informed by the price ranges given by modern models that combine climate science with economic approaches. With carbon taxes the national authorities gain tax revenue, which is a specific benefit of taxes over emission caps. This revenue recycling makes the interactions of carbon taxes and labour taxes in particular especially important to investigate. This study reviews the background of carbon pricing from the perspectives of international agreements, efficiency, and modern integrated assessment models and their estimates of the social cost of carbon. Further, we go through a simple model that shows the interactions of environmental taxation and the economy. The guiding light of the study is in particular Weitzman's (2014) idea of the single carbon price. The study aims to combine the existing knowledge on carbon pricing and to point to the urgency of emission mitigation actions.
  • Nurmi, Aleksi (2016)
    The aim of this study is to understand the fundamental features of China's economic transition since 1992. To this end, the central features of China's transition are reviewed, most notably the main economic reforms, firm-level resource reallocation, productivity differences between state-owned and private enterprises, moderate wage growth and rising income inequality, financial market imperfections, and the central macroeconomic indicators: the accumulation of a foreign surplus and high aggregate investment and savings rates. A growth model consistent with China's growth experience is built to give a clear qualitative explanation of China's puzzling phenomena: Why does a country accumulate a foreign surplus despite a high domestic rate of return to capital? Why does a country's rate of return to capital remain high in spite of a high investment rate? The cornerstones of the model are heterogeneity in productivity, reallocation of resources and asymmetric financial imperfections. The enterprise sector is divided into private and state-owned enterprises. Private enterprises are more productive, but due to discrimination by the financial sector they must rely on internal savings, while state-owned enterprises are less productive but survive in equilibrium due to better access to external financing. If entrepreneurial savings are large enough, private enterprises gradually outgrow state-owned enterprises. The financial integration of state-owned firms and labor mobility sustain the rate of return for both types of firms during the transition. Moreover, the aggregate rate of return to capital increases due to the composition effect. The accumulation of the foreign surplus originates from the financial imperfections. Wage earners deposit their savings in banks, which in turn can invest either in domestic enterprises or in foreign bonds. As the transition progresses, the volume of high-productivity, financially constrained enterprises increases while the volume of low-productivity, externally financed enterprises decreases. Hence, as the volume of state-owned enterprises decreases, a larger amount of domestic savings is invested in foreign assets by the financial intermediaries, causing the foreign surplus to increase. After the transition is over, the economy is dominated by private enterprises and capital accumulation is subject to diminishing returns to capital. The main departures from China's experience are the frictionless labor market, the laissez-faire financial market environment and the prediction that state-owned enterprises fully fade from the economy. Despite these simplifications, the model gives a clear qualitative explanation of China's puzzling phenomena of a sustained return to capital and a growing foreign surplus. The simplifications allow the model to focus on the main differences between private (E) and state-owned (F) firms, that is to say the heterogeneity in productivity and the asymmetric financial imperfections.
  • Koponen, Kristine (2013)
    Cross-country co-movements of economic variables have been documented in macroeconomic research. This phenomenon has puzzled researchers in the field of dynamic stochastic general equilibrium (DSGE) models, because early DSGE models had difficulty replicating the co-movements of outputs. The thesis approaches the cross-country co-movement of output cycles in DSGE models by introducing correlation into the technology shocks. The objective is to study whether the correlation in the technology shocks enhances the model's ability to capture the cross-country correlations observed in empirical data. The thesis presents a two-country DSGE model constructed using the results of Galí and Monacelli (2005). The original model of Galí and Monacelli is a small-country model, and the thesis demonstrates how the model is re-constructed as consisting of two large economic regions. Another important modification to the original model is that the thesis presents a distinctive shock process that allows the technology shocks to correlate. This is done by adding a foreign technology shock variable to the domestic technology shock process. The final model is presented as a system of thirteen equations and, as a solution to the system, the dynamics of the model are observed. The results show that the two-country model with correlated shock processes is able to replicate the cross-country correlations of empirical data well. This result is compared to a benchmark model with no shock correlations, and the comparison reveals that although the benchmark model succeeds in replicating the cross-country correlations between inflation rates and nominal interest rates, it does not produce as high an output gap correlation as the model with correlated shock processes. The difference between these models is caused by the distinctive shock processes. The technology shocks directly affect potential output, and real output adjusts slowly in response to changes in expectations; this causes the dynamics in the output cycle. The results of the thesis show that introducing correlation between country-specific technology shocks can enhance the model's ability to produce realistic cross-country output co-movements. This result should apply to other models that follow the framework of Galí and Monacelli. The generalisation of the results could still be studied further. In addition, including new features in the model would allow for the examination of a wider variety of shocks.
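    A shock process with the property described above, in which each region's technology responds to the other region's past technology, can be sketched as follows; the notation is illustrative rather than the thesis's exact specification.

```latex
a_t = \rho\, a_{t-1} + \psi\, a^{*}_{t-1} + \varepsilon_t,
\qquad
a^{*}_t = \rho\, a^{*}_{t-1} + \psi\, a_{t-1} + \varepsilon^{*}_t,
```

    where a_t and a*_t are the (log) technology levels of the two regions and a spillover parameter ψ > 0 makes country-specific shocks propagate across the border, generating positive cross-country correlation in potential output and hence in output gaps.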
  • Norring, Anni (2015)
    In this thesis I study the determinants of international investments with the gravity model of international financial asset trade. I discuss the relevant literature and present a theoretical framework for gravity in cross-border investments. I compare three empirical approaches: the classic approach, which studies the determinants of the observed levels of cross-border holdings with a fixed effects panel model; the dichotomous approach, which studies the effects of the determinants on the probability of there being a positive cross-border investment with a probit model; and finally an approach which combines the two previous ones in a double-hurdle model. I propose that the double-hurdle model is the correct approach in the context of cross-border investments.
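    As a rough illustration of the estimating equations behind the three approaches, a typical empirical gravity specification for bilateral asset holdings might be written as follows; the notation is illustrative rather than the thesis's exact specification.

```latex
\ln A_{ij} = \alpha_i + \alpha_j + \beta \ln \mathrm{dist}_{ij} + \gamma' X_{ij} + \varepsilon_{ij},
\qquad
\Pr(A_{ij} > 0) = \Phi\!\left(\delta \ln \mathrm{dist}_{ij} + \theta' X_{ij}\right),
```

    where A_ij is the holding of country j's assets by investors in country i, α_i and α_j are source- and host-country fixed effects, dist_ij is bilateral distance and X_ij collects other bilateral determinants (e.g. common language or trade links). The first equation corresponds to the classic level approach, the second to the probit participation approach, and a double-hurdle model combines the participation and level equations so that zero and positive holdings are modelled jointly.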