
Browsing by Subject "ekonometria" (econometrics)


  • Beniard, Henry (2010)
    This thesis investigates empirically whether variables can be found that reliably predict Finnish economic activity. The aim is to find and combine several variables with predictive ability into a composite leading indicator of the Finnish economy. The target variable it attempts to predict, and thus the measure of the business cycle used, is Finnish industrial production growth. Different economic theories suggest several potential predictor variables in categories such as consumption data, data on orders in industry, survey data, interest rates and stock price indices. A review of a large amount of empirical literature on economic forecasting finds that interest rate spreads in particular, such as the term spread on government bonds, have been useful predictors of future economic growth. However, the literature surveyed suggests that the variables found to be good predictors differ depending on the economy being forecast, the model used and the forecast horizon. Based on the literature reviewed, a pool of over a hundred candidate variables is gathered. A procedure involving both in-sample and pseudo out-of-sample forecast methods is then developed to find the variables with the best predictive ability from this set. This procedure yields a composite leading indicator of the Finnish economy comprising seven component series. These series are very much in line with the types of variables found useful in previous empirical research. When the composite leading indicator is used to forecast over a sample from 2007 to 2009, a span that includes the latest recession, its forecasting ability is far poorer; the same occurs when forecasting a real-time data set. It would seem, however, that a few very large individual forecast errors are the main reason for the poor performance of the composite leading indicator in these forecast exercises. The findings suggest several ways the adopted methods could be developed to produce more accurate forecasts. Other intriguing topics for further research are also explored.
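    Below is a minimal sketch, on simulated placeholder data, of the kind of pseudo out-of-sample screening step described above: each candidate series is ranked by its expanding-window out-of-sample RMSE for the target, and the best seven are averaged into a composite indicator. The horizon, window sizes and variable names are illustrative assumptions, not the thesis code.

```python
# Pseudo out-of-sample screening of candidate predictors (illustrative sketch).
# All data here are simulated placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T, n_candidates, h = 240, 20, 6          # months, candidate series, forecast horizon
y = rng.normal(size=T)                    # industrial production growth (placeholder)
X = rng.normal(size=(T, n_candidates))    # pool of candidate predictors (placeholder)

def oos_rmse(target, predictor, start=120):
    """Expanding-window pseudo out-of-sample RMSE of a bivariate forecast model."""
    errors = []
    for t in range(start, len(target) - h):
        # regress y(t+h) on a constant, y(t) and the candidate x(t), using data up to t
        Z = sm.add_constant(np.column_stack([target[:t], predictor[:t]]))
        fit = sm.OLS(target[h:t + h], Z).fit()
        pred = fit.params @ np.r_[1.0, target[t], predictor[t]]
        errors.append(target[t + h] - pred)
    return np.sqrt(np.mean(np.square(errors)))

scores = [oos_rmse(y, X[:, j]) for j in range(n_candidates)]
best = np.argsort(scores)[:7]             # keep the seven best candidates
# composite leading indicator: average of the standardized best components
cli = ((X[:, best] - X[:, best].mean(0)) / X[:, best].std(0)).mean(1)
print("selected candidates:", best)
```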
  • Mytty, Tuukka (2013)
    Does carbon dioxide predict temperature? No, it does not, in the period 1880-2004 and with the carbon dioxide and temperature data used in this thesis. According to the Intergovernmental Panel on Climate Change (IPCC), carbon dioxide is the most important factor in raising the global temperature, so it is reasonable to assume that carbon dioxide truly predicts temperature. Because this thesis uses observational data, no causal interpretation can be made, only predictive inferences. The data cover the years 1880-2004 and consist of carbon dioxide emissions and temperature anomalies; the base period for the anomalies is 1961-1990. The main analysis method is the cointegrated VAR model, but the standard VAR model is also used. The variables were tested for possible unit roots, and unit roots were found to be present. The variables were then tested for the cointegrating rank, and here the analysis divided into three parts: first, under the assumptions that the variables are integrated of order one, with a constant as the deterministic term and one cointegrating relation; second, allowing the variables to be integrated of order two, with a linear trend as the deterministic term and one cointegrating relation; and third, based on some weak evidence that the variables were not cointegrated, so that the analysis could be done in differences. In the first case the result was that carbon dioxide does not predict temperature but temperature actually predicted carbon dioxide, and the second version gave the same result. In the third case neither variable predicted the other. These results go against what is considered the common consensus on climate change.
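    A minimal sketch of the three-part workflow described above, assuming statsmodels and simulated placeholder CO2/temperature series: ADF unit-root tests, a Johansen test for the cointegrating rank, a VECM with one cointegrating relation, and a VAR in differences with a Granger-type predictability test.

```python
# Assumed workflow sketch, not the thesis code; the series are simulated placeholders.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
n = 125                                    # annual observations, 1880-2004
co2 = np.cumsum(rng.normal(0.5, 1.0, n))   # placeholder I(1)-looking emissions series
temp = np.cumsum(rng.normal(0.0, 0.2, n))  # placeholder temperature anomaly
data = pd.DataFrame({"co2": co2, "temp": temp})

# Step 1: unit-root (ADF) tests on the levels
for col in data:
    stat, pval, *_ = adfuller(data[col])
    print(f"ADF {col}: stat={stat:.2f}, p-value={pval:.2f}")

# Step 2: Johansen test for the cointegrating rank (constant as deterministic term)
joh = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", joh.lr1, "95% critical values:", joh.cvt[:, 1])

# If rank 1 is accepted, estimate a VECM with one cointegrating relation
vecm = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
print(vecm.alpha)                          # adjustment coefficients

# Step 3 (no cointegration): VAR in first differences, with a Granger-type
# test of whether CO2 helps predict temperature
var = VAR(data.diff().dropna()).fit(2)
print(var.test_causality("temp", ["co2"], kind="f").summary())
```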
  • Fornaro, Paolo (2011)
    In recent years, thanks to developments in information technology, large-dimensional datasets have become increasingly available. Researchers now have access to thousands of economic series, and the information contained in them can be used to create accurate forecasts and to test economic theories. To exploit this large amount of information, researchers and policymakers need an appropriate econometric model. Usual time series models, vector autoregressions for example, cannot incorporate more than a few variables. There are two ways to solve this problem: use variable selection procedures or gather the information contained in the series to create an index model. This thesis focuses on one of the most widespread index models, the dynamic factor model (the theory behind this model, based on previous literature, is the core of the first part of this study), and its use in forecasting Finnish macroeconomic indicators (the focus of the second part of the thesis). In particular, I forecast economic activity indicators (e.g. GDP) and price indicators (e.g. the consumer price index) from three large Finnish datasets. The first dataset contains a large set of aggregated series obtained from the Statistics Finland database. The second dataset is composed of economic indicators from the Bank of Finland. The last dataset is formed of disaggregated data from Statistics Finland, which I call the micro dataset. The forecasts are computed following a two-step procedure: in the first step I estimate a set of common factors from the original dataset; the second step consists of formulating forecasting equations that include the factors extracted previously. The predictions are evaluated using the relative mean squared forecast error, where the benchmark model is a univariate autoregressive model. The results are dataset-dependent. The forecasts based on factor models are very accurate for the first dataset (the Statistics Finland one), while they are considerably worse for the Bank of Finland dataset. The forecasts derived from the micro dataset are still good, but less accurate than the ones obtained in the first case. This work leads to multiple research developments. The results obtained here can be replicated on longer datasets. The non-aggregated data can be represented in an even more disaggregated form (firm level). Finally, the use of micro data, one of the major contributions of this thesis, can be useful in the imputation of missing values and the creation of flash estimates of macroeconomic indicators (nowcasting).
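    A minimal sketch of the two-step diffusion-index procedure on simulated data (not the thesis datasets): factors are extracted by principal components and then plugged into an h-step-ahead forecasting regression, whose error is compared with a univariate AR benchmark. For brevity the comparison below uses in-sample residuals rather than the pseudo out-of-sample forecasts used in the thesis.

```python
# Two-step dynamic factor ("diffusion index") forecast sketch on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
T, N, r, h = 200, 100, 3, 1               # periods, series, factors, horizon
F = rng.normal(size=(T, r))               # latent factors (placeholder)
X = F @ rng.normal(size=(r, N)) + rng.normal(size=(T, N))   # large panel
y = F @ np.array([1.0, -0.5, 0.3]) + rng.normal(size=T)     # target, e.g. GDP growth

# Step 1: principal-component estimates of the factors from the standardized panel
Z = (X - X.mean(0)) / X.std(0)
eigval, eigvec = np.linalg.eigh(Z.T @ Z / T)
F_hat = Z @ eigvec[:, -r:]                # r leading principal components

# Step 2: forecasting regression y(t+h) = const + beta'F_hat(t) + gamma*y(t)
W = sm.add_constant(np.column_stack([F_hat[:-h], y[:-h]]))
factor_fit = sm.OLS(y[h:], W).fit()
factor_mse = np.mean(factor_fit.resid ** 2)

# Benchmark: univariate autoregression on the same target
ar_fit = sm.OLS(y[h:], sm.add_constant(y[:-h])).fit()
relative_msfe = factor_mse / np.mean(ar_fit.resid ** 2)
print(f"relative MSE (factor model / AR benchmark): {relative_msfe:.2f}")
```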
  • Laakso, Tomi (2022)
    Flash crashes are one of the most prominent market inefficiencies recognized in stock and index prices. They violate the efficient markets hypothesis and affect the real economy as well. Their proper forecasting has not been possible with conventional methods due to their seeming rarity and extremity. Furthermore, they are difficult to detect in the noise of price processes. By augmenting the HAR model with company-specific news data, the aim is to improve volatility estimates on days when these extreme events occur. These days are first identified from the price processes by a novel statistical method called the V-statistic, which detects statistically significant flash crashes. The data consist of every executed trade for six stocks on the New York Stock Exchange and company-specific news-flow data obtained from the RavenPack News Analytics service; both data sets cover the years 2014 to 2016. The HAR model is estimated for all six stocks with and without the news data, and the in-sample model estimates are compared both on the full data and on the days when flash crashes occur. The results are ambiguous, but they give slight indications that news data could be useful for improving volatility estimates on the days when flash crashes occur. The model's volatility estimates are better on average when augmented with news data: the mean absolute percentage error is 0.1% smaller on average across all entities when the model is augmented with news data. However, there are differences across companies in how much, if at all, news data improves the models' performance. In conclusion, further work is needed to verify the usefulness of news data in forecasting flash crashes.
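    A minimal sketch of a HAR regression for daily realized volatility, with and without a news-count regressor, on simulated series (not the thesis data); the comparison by mean absolute percentage error mirrors the evaluation described above.

```python
# HAR model for realized volatility, optionally augmented with a news regressor.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
days = 750                                       # roughly 2014-2016 trading days
rv = pd.Series(np.exp(rng.normal(-9, 0.5, days)))     # placeholder realized variance
news = pd.Series(rng.poisson(2, days)).astype(float)  # placeholder daily news count

df = pd.DataFrame({
    "rv_next": rv.shift(-1),                     # target: next day's realized variance
    "rv_d": rv,                                  # daily component
    "rv_w": rv.rolling(5).mean(),                # weekly component
    "rv_m": rv.rolling(22).mean(),               # monthly component
    "news": news,
}).dropna()

def mape(model_cols):
    """In-sample mean absolute percentage error of an OLS HAR regression."""
    fit = sm.OLS(df["rv_next"], sm.add_constant(df[model_cols])).fit()
    return np.mean(np.abs(fit.resid / df["rv_next"])) * 100

print(f"HAR MAPE:        {mape(['rv_d', 'rv_w', 'rv_m']):.2f}%")
print(f"HAR + news MAPE: {mape(['rv_d', 'rv_w', 'rv_m', 'news']):.2f}%")
```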
  • Long, Feiran (2012)
    Ever since Harry Markowitz published his remarkable piece on portfolio diversification in the 1950s, which then evolved into Modern Portfolio Theory (MPT), the trade-off between return, commonly measured by expected return, and risk, commonly measured by expected standard deviation, has been at the heart of investors' decision-making process. Over time the simplicity of this approach has proven powerful enough to outweigh its long list of theoretical shortcomings, which are discussed in the paper, and its popularity with both academics and practitioners has remained intact. The aim of this paper is to present an alternative way of measuring risk when the underlying investment instrument is modeled as a semimartingale process. This alternative measure, quadratic variation, we argue offers better insight into the riskiness of a stochastic process when assumptions such as normally distributed returns are no longer satisfied. Moreover, given the advanced computing power accessible to professional investors nowadays, estimating quadratic variation can be simple enough to offer an appealing alternative to standard deviation in practice as well. To better illustrate the difference between quadratic variation and standard deviation in portfolio optimization, we form an investment portfolio with two instruments: Nokia equity, representing the equity market, and German Bund futures, representing the fixed income market. We then perform two mean-variance optimization exercises, one using standard deviation as the risk measure and the other using the estimate of quadratic variation as the risk measure. For the risk estimation itself, we also use different ways of estimating quadratic variation that take into account the issue of market microstructure. The results show that using quadratic variation the optimal portfolio contains roughly a 10% holding in Nokia stock, while in the traditional mean-variance framework the corresponding figure would have been around 13%.
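    A minimal sketch contrasting the two risk measures on simulated intraday returns (not the thesis data): the variance of daily returns versus realized quadratic variation estimated from sparsely sampled intraday returns, with both feeding the same simple two-asset weighting rule. The zero-correlation assumption and all numbers are illustrative only.

```python
# Realized quadratic variation vs. daily-return variance as a portfolio risk input.
import numpy as np

rng = np.random.default_rng(4)
n_days, ticks_per_day = 250, 390           # one trading year, 1-minute ticks

# placeholder intraday log-returns for two assets ("equity" and "bond futures")
r_equity = rng.normal(0, 0.02 / np.sqrt(ticks_per_day), (n_days, ticks_per_day))
r_bond = rng.normal(0, 0.005 / np.sqrt(ticks_per_day), (n_days, ticks_per_day))

def annualized_variance_daily(r):
    """Classical risk measure: variance of daily returns, annualized."""
    return r.sum(axis=1).var() * n_days

def annualized_qv(r, step=5):
    """Realized quadratic variation from sparsely sampled (5-minute) intraday returns."""
    sampled = r.reshape(n_days, -1, step).sum(axis=2)
    return (sampled ** 2).sum(axis=1).mean() * n_days

for label, var_fn in [("daily-return variance", annualized_variance_daily),
                      ("quadratic variation", annualized_qv)]:
    risks = np.array([var_fn(r_equity), var_fn(r_bond)])
    # inverse-variance (minimum-variance) weights for two uncorrelated assets
    w_equity = (1 / risks[0]) / (1 / risks).sum()
    print(f"{label}: weight in equity = {w_equity:.1%}")
```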
  • Antin, Ling (2015)
    In this thesis I use Monte Carlo simulation to obtain profit data for different power plants in the United States, in order to help investors make investment decisions. The electricity market has entered a new period that can be called liberalization. In a liberalized electricity market investors must bear investment risks themselves; they can no longer pass investment risks on to consumers (Roques, Nuttall & Newbery, 2006, p. 3), as they could before liberalization. It is therefore necessary to find a new way to analyze the various risks properly. I consider a nuclear power plant, a natural gas power plant and a coal power plant. In the first step I use sensitivity analysis to find the factors that most influence the net present value (NPV) of the three power plants. In the second step I obtain the NPV distributions of the three power plants, and in the third step the distributions of their profitability index (PI). From the Monte Carlo simulations of the NPV analysis I conclude that at a 10% discount rate the natural gas power plant is the best choice, while at a 5% discount rate it is unclear whether natural gas or nuclear is the best choice. From the Monte Carlo simulations of the PI analysis I find that at a 5% discount rate it is unclear which of the three power plants is the best choice, and at a 10% discount rate it is unclear whether coal or natural gas is the best choice. I then carry out a portfolio analysis. In the last part I analyze the risk factors of investing in coal, natural gas and nuclear power plants. Investors who want to invest in American power plants can thus use this analysis as a starting point and make their final investment decision according to their own risk appetite.
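    A minimal sketch of the Monte Carlo step for one plant type, with purely illustrative revenue and cost distributions (not the thesis inputs): uncertain cash-flow drivers are drawn repeatedly and the resulting NPV and PI distributions are summarized.

```python
# Monte Carlo simulation of NPV and PI for a single plant type (illustrative numbers).
import numpy as np

rng = np.random.default_rng(5)
n_sims, lifetime, discount = 10_000, 40, 0.10     # simulations, years, discount rate

investment = 4_000.0                              # assumed upfront investment cost
# uncertain annual revenues and fuel/O&M costs (placeholder distributions)
revenue = rng.normal(600.0, 120.0, (n_sims, lifetime))
cost = rng.normal(250.0, 60.0, (n_sims, lifetime))

years = np.arange(1, lifetime + 1)
discount_factors = (1 + discount) ** -years
pv_cashflows = ((revenue - cost) * discount_factors).sum(axis=1)

npv = pv_cashflows - investment                   # net present value per simulation
pi = pv_cashflows / investment                    # profitability index per simulation

print(f"mean NPV: {npv.mean():.0f}, P(NPV < 0): {(npv < 0).mean():.1%}")
print(f"mean PI: {pi.mean():.2f}, 5th percentile PI: {np.percentile(pi, 5):.2f}")
```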
  • Suihkonen, Lauri (2009)
    The Finnish roundwood industry relies on wood sales by Finnish nonindustrial private forest owners (NIPFs). More than half of the raw material used by the Finnish roundwood industry comes from NIPFs. It is therefore important for the Finnish roundwood industry, and for the whole economy, to know which issues affect NIPFs' wood supply. This paper examines the supply of roundwood in Finland using the theoretical approach of the Fisherian consumption-saving model. The research examines the price elasticity of wood supply in Finland at the regional level; to examine the regional markets, Finland is divided into six price areas. The monthly price and quantity data from 1987 to 2007 are gathered from the Finnish Forest Research Institute (METLA). Standing sale supply and delivery sale supply are examined separately. The results show that the price elasticity of wood supply is usually positive in both the short run and the long run, while the expected price variable's effect on wood supply is negative. The results indicate that the estimated short-run elasticities of supply are much greater than in earlier studies; this is because this research uses monthly data whereas earlier studies have used quarterly or annual data. The estimated long-run elasticities of supply, which describe reactions to the economic trend, are of the same magnitude as in earlier studies. There were remarkable differences between the standing sale and delivery sale models: in the short run, the price elasticities of supply in the delivery sale models were much smaller than in the standing sale models, while in the long run the results were the opposite. The results also show that there are remarkable differences between roundwood supply in the different price areas. This result strengthens earlier research results on regional market differences in Finnish pulpwood supply.
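    A minimal sketch, on simulated monthly data rather than the METLA series, of a log-log supply equation in which the coefficient on the current price is the short-run price elasticity of supply and an expected-price proxy enters separately.

```python
# Log-log wood supply regression sketch: elasticity = coefficient on log price.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
months = 252                               # monthly observations, 1987-2007
log_price = np.log(50 + np.cumsum(rng.normal(0, 0.5, months)))
log_expected_price = pd.Series(log_price).rolling(12).mean()   # naive expectation proxy
log_quantity = 2.0 + 1.5 * log_price - 0.8 * log_expected_price + rng.normal(0, 0.1, months)

df = pd.DataFrame({
    "lq": log_quantity,                    # log of supplied quantity
    "lp": log_price,                       # log of current price
    "lpe": log_expected_price,             # log of expected price
}).dropna()

fit = smf.ols("lq ~ lp + lpe", data=df).fit()
print(fit.params)                          # coefficient on lp = short-run price elasticity
```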