
Browsing by Subject "modelling"


  • Lehtinen, Julius (2022)
    Sortition – selecting representatives by drawing lots – has played a significant part in the history and development of democracy. In modern, more representative variants of democracy, however, the custom of sortition has fallen from grace and largely vanished from the main stage of democracy. But how well would it work in modern parliaments, compared to the current practice of electing representatives? The thesis undertakes to simulate the main functions of a plural parliament with no randomised representatives present, and with randomised representatives present to different extents, and ultimately contrasts the two cases to answer this research question. The thesis approaches the question by constructing a rather simplistic model of a plural parliament and its functions, generating a metric of the hypothetical quality and volume of the legislation produced by the parliament. The model is run and re-run thousands of times as a Monte Carlo simulation with a few randomised and many fixed variables to produce an overall results dataset of the simulations with different fractions of randomised legislators present in the parliament. The results dataset is ultimately subjected to an analysis of variance (ANOVA) to determine the likelihood of the different fractions of independent legislators producing legislation of different quantity and quality on average. The conducted ANOVA indicates that the quantity and quality of legislation produced by different fractions of independent legislators is very probably not equal on average. Therefore, the quality and quantity of legislation seems to depend on the fraction of randomised legislators in a plural parliament. The quality and quantity of legislation is, further, higher on average in plural parliaments with a moderate number of randomised legislators than in a plural parliament where randomised legislators are not present. The thesis concludes that – as the quality and quantity is on average higher with a moderate number of randomised legislators, and as the true quality and quantity of the legislation is very probably not equal across the simulations – the quality and quantity of legislation is higher with a moderate amount of sortition, i.e., randomised legislators present in a plural parliament. The thesis goes on to briefly discuss how the conditions of the model could be enabled in real life and the best ways to achieve the results the model points towards.
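The simulation-plus-ANOVA pipeline described above can be sketched roughly as follows. This is a toy stand-in, not the thesis's actual model: the voting rule, the "quality" metric and all parameters are invented for illustration, and the ANOVA is reduced to its one-way F statistic.

```python
import random
import statistics

def simulate_parliament(frac_random, n_seats=200, n_votes=100, seed=0):
    """One simulated parliamentary term; returns an invented 'quality' score.
    Party members vote along bloc lines; sortition members judge bill merit."""
    rng = random.Random(seed)
    n_random = int(frac_random * n_seats)
    quality = 0.0
    for _ in range(n_votes):
        bloc_bias = rng.gauss(0, 1)            # shared party-line pull
        merit = rng.gauss(0, 1)                # true merit of the bill
        yes = 0
        for seat in range(n_seats):
            if seat < n_random:                # sortition member: noisy merit vote
                yes += rng.gauss(merit, 1) > 0
            else:                              # party member: follows the bloc
                yes += rng.gauss(bloc_bias, 0.5) > 0
        if yes > n_seats / 2:                  # bill passes; merit accumulates
            quality += merit
    return quality

def one_way_anova_F(groups):
    """F statistic of a one-way ANOVA over lists of observations."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = statistics.mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - statistics.mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

Running `one_way_anova_F([[simulate_parliament(f, seed=s) for s in range(30)] for f in (0.0, 0.25, 0.5)])` mirrors the comparison of different sortition fractions.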
  • Lehtomaa, Jere (2017)
    The incomplete global coverage of current emissions trading schemes has raised concerns about free-riding and carbon leakage. The EU ETS, the first and currently the biggest carbon market, is at the centre of such fears. Carbon-based import tariffs have thereby been proposed to compensate domestic industries for the cost disadvantage against their rivals in non-regulating countries. This thesis uses an applied general equilibrium (AGE) model to assess the impacts of a hypothetical EU carbon tariff on the Finnish economy. The carbon content of imported goods is first estimated with an environmentally extended input-output analysis, and the tariff is levied according to the anticipated price of EU emission allowances. To examine the sensitivity of the results, five additional scenarios are then constructed by altering the key simulation parameters. The tariff is imposed on the most energy-intensive and trade-exposed industries in 2016 and simulated until 2030. The results suggest that carbon tariffs are detrimental to the Finnish economy. The negative outcome is driven by high material intensity and a growing dependence on imported materials throughout the industry sector. As a result, the tariff-induced increase in import prices adds up to a notable growth in total production costs. Moreover, the negative impact is most pronounced within the export-oriented heavy manufacturing sector that the tariff was designed to shelter in the first place. The few sectors that gain from the tariff were not directly subject to it, but benefit from the secondary impacts as the economy adapts to the shock. The findings imply that, given the deep integration of global value chains, the appeal of protective tariffs, even if environmentally motivated, can rest on a harmful over-simplification.
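The carbon-content step mentioned above, environmentally extended input-output (EEIO) analysis, boils down to evaluating e · (I − A)⁻¹ · y, where A is the inter-industry coefficient matrix, e the direct emission intensities and y final demand. A minimal two-sector sketch; the coefficients below are invented for illustration, not Finnish data:

```python
# EEIO sketch: emissions embodied in final demand via the Leontief inverse.
# A, e and y are made-up two-sector numbers, purely illustrative.

def mat_inv_2x2(m):
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def embodied_emissions(A, e, y):
    """Total emissions embodied in final demand y:  e · (I - A)^-1 · y."""
    I_minus_A = [[(1.0 if i == j else 0.0) - A[i][j] for j in range(2)]
                 for i in range(2)]
    L = mat_inv_2x2(I_minus_A)                     # Leontief inverse
    gross_output = [sum(L[i][j] * y[j] for j in range(2)) for i in range(2)]
    return sum(e[i] * gross_output[i] for i in range(2))

A = [[0.1, 0.3], [0.2, 0.1]]   # input coefficients (assumed)
e = [0.5, 1.2]                 # tCO2 per unit of gross output (assumed)
y = [100.0, 50.0]              # final demand (assumed)
total = embodied_emissions(A, e, y)
```

Because the Leontief inverse counts all upstream rounds of production, the embodied total exceeds the direct emissions e · y whenever A is non-zero.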
  • Grazhdankin, Evgeni (2018)
    We have developed software for homology modelling by satisfaction of distance restraints, using MODELLER as the back-end. The protocols used extend the exploration of distance restraints and conformational space. In an optimization cycle, we drive the models towards better structures as assessed by metrics including the DOPE score, retrospective distance-restraint realization, and others. Hydrogen bond networks are optimized for their size and connectivity density. The performance of the method is evaluated for its ability to reconstruct GPCR structures and the extracellular loop 2. The software is written in object-oriented Python (v.2.7) and supports easy extension with additional modules. We built a relational PostgreSQL database for the restraints to allow for data-driven machine and deep learning applications. An important part of the work was the visualization of the distance restraints with custom PyMOL scripts for three-dimensional viewing. Additionally, we automatically generate a plethora of diagnostic plots for assessing the performance of the modelling protocols. The software utilizes parallelism and is computationally practical, with compute requirements an order of magnitude lower than those typically seen in molecular dynamics simulations. The main challenges left to be solved are the evaluation of restraint goodness, assigning secondary structures, restraint interconditioning, and water and ligand placement.
  • Ylä-Mella, Lotta (2020)
    Terrestrial cosmogenic nuclides can be used to date glacial events. The nuclides are formed when cosmic rays interact with atoms in rocks. When the surface is exposed to the rays, the number of produced nuclides increases. Shielding, such as glaciation, can prevent production. Nuclide concentration decreases with depth because the bedrock attenuates the rays. The northern hemisphere has experienced several glaciations, but typically only the latest one can be directly observed. The aim of the study was to determine whether these nuclides, produced by cosmic rays, can be used to detect glaciations before the previous one, using a forward and an inverse model. The forward model predicted the nuclide concentration with depth based on a glacial history. The longer the exposure duration, the higher the number of nuclides in the rock. In the model, it was possible to use three isotopes: Be-10, C-14 and Al-26. The forward model was used to produce synthetic samples, which were then used in the inverse model. The purpose of the inverse model was to test which kinds of glacial histories produce nuclide concentrations similar to those of the sample. The inverse model produced a concentration curve which was compared with the concentration of the samples. The misfit of the inverse solution was defined with an "acceptance box". The box was formed from the thickness of the sample and the corresponding concentrations. If the curve intersected the box, the solution was accepted. Small misfit values were obtained when the curve was close to the sample. The idea was to find concentration curves with values as similar to the samples' as possible. The inverse model was used in several situations in which the number of constraints was varied. If the timing of the last deglaciation and the amount of erosion were known, the second-to-last deglaciation was found relatively well.
With looser constraints, it was nearly impossible to detect the past glaciations unless a depth profile was used in the sampling. The depth profile provided a tool to estimate the amount of erosion and the total exposure duration using only one isotope.
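The forward-model idea, production decreasing exponentially with depth and radioactive decay whenever the surface is shielded by ice, can be sketched as follows. The surface production rate, attenuation length and half-life below are illustrative assumptions in the style of Be-10, not the thesis's parameters:

```python
import math

# Sketch of a cosmogenic-nuclide forward model: concentration at depth
# after an alternating exposure/burial (glaciation) history.

def concentration(depth_cm, history, P0=5.0, attenuation=160.0, half_life=1.387e6):
    """history: list of (duration_yr, exposed) tuples, oldest first.
    Returns nuclide concentration (atoms/g) at the given depth."""
    lam = math.log(2) / half_life                  # decay constant, 1/yr
    P = P0 * math.exp(-depth_cm / attenuation)     # production rate at depth
    C = 0.0
    for duration, exposed in history:
        if exposed:
            # production-decay balance during an exposure interval
            C = (C * math.exp(-lam * duration)
                 + (P / lam) * (1 - math.exp(-lam * duration)))
        else:
            # pure decay while shielded by ice
            C *= math.exp(-lam * duration)
    return C

# Two-glaciation toy history: exposure, burial, exposure (durations in years)
sample = concentration(50.0, [(2e4, True), (8e4, False), (1e4, True)])
```

Evaluating `concentration` at several depths gives the kind of depth profile the inverse model compares against sampled concentrations.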
  • Tanhuanpää, Topi (2011)
    There is ever-growing interest in coarse woody debris (CWD) because of its role in maintaining biodiversity and storing atmospheric carbon. The aim of this study was to create an ALS-data-utilizing model for mapping CWD and estimating its volume. The effect of grid cell size on the model's performance was also considered. The study area is located in Sonkajärvi in eastern Finland and consisted mostly of young, commercially managed forests. The study utilized low-frequency ALS data and a precise strip-wise field inventory of CWD. The data was divided into two parts: one fourth was used for modelling and the remaining three fourths for validating the constructed models. Both parametric and non-parametric modelling practices were used for modelling the area's CWD. Logistic regression was used to predict the probability of encountering CWD in grid cells of different sizes (0.04, 0.20, 0.32, 0.52 and 1.00 ha). The explanatory variables were chosen among 80 ALS-based variables and their transformations in three stages. Firstly, the variables were plotted against CWD volumes. Secondly, the best variables from the first stage were examined in single-variable models. Thirdly, variables for the final multivariable model were chosen at a 95 % level of significance. The 0.20 ha model was parametrized for the other grid cell sizes. In addition to the parametric model constructed with logistic regression, the 0.04 ha and 1.0 ha grid cells were also classified with CART modelling (Classification and Regression Trees). With CART modelling, non-linear dependencies were sought between ALS variables and CWD. CART models were constructed for both CWD presence and volume. When the presence of CWD in the study grid cells was considered, CART modelling resulted in better classification than logistic regression. With the logistic model, classification improved as the grid cell size grew from 0.04 ha (kappa 0.19) to 0.32 ha (kappa 0.38). At the 0.52 ha cell size the kappa value of the classification started to diminish (kappa 0.32) and diminished further at the 1.0 ha cell size (kappa 0.26). The CART classification improved as the cell size grew larger. The results of CART modelling were better than those of the logistic model at both the 0.04 ha (kappa 0.24) and 1.0 ha (kappa 0.52) cell sizes. The relative RMSE of the cell-wise CWD volume predicted with CART models diminished as the cell size was enlarged. At the 0.04 ha grid cell size the RMSE of the total CWD volume of the study area was 197.1 %, and it diminished to 120.3 % as the grid cell size was enlarged to 1.0 ha. On the grounds of these results it can be stated that the link between CWD and ALS variables is weak but becomes slightly stronger as cell size increases. However, as cell size increases, small-scale variation in CWD becomes more difficult to detect. In this study, the presence of CWD could be estimated somewhat accurately, but the mapping of small-scale patterns was not successful with the methods used. Accurate location of small-scale CWD variation requires further research, particularly on the use of high-density ALS data in CWD inventories.
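The kappa values quoted above measure classification agreement beyond chance. For reference, Cohen's kappa for a binary (CWD present/absent) confusion matrix is only a few lines; the counts fed to it would come from the cellwise classifications, and any numbers used with it here are illustrative:

```python
# Cohen's kappa for a 2x2 confusion matrix (true/false positives/negatives).

def cohens_kappa(tp, fp, fn, tn):
    """Agreement beyond chance: (observed - chance) / (1 - chance)."""
    n = tp + fp + fn + tn
    p_observed = (tp + tn) / n
    # chance agreement from the marginal frequencies
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)
    p_no = ((fn + tn) / n) * ((fp + tn) / n)
    p_chance = p_yes + p_no
    return (p_observed - p_chance) / (1 - p_chance)
```

Perfect agreement gives kappa 1, chance-level agreement gives 0, so the reported values (e.g. 0.19 vs. 0.52) can be read as modest versus moderate skill.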
  • Kallio, Varpu (2014)
    The purpose of this study is to evaluate patients' quality of life and healthcare use before and after bariatric surgery and to produce new, clinical-data-based information on the cost-effectiveness of bariatric surgery. Healthcare resources are limited and expenditures have grown from year to year. It is therefore important to make cost-effectiveness evaluations so that financial resources can be allocated properly. The research population consists of patients who underwent gastric bypass or sleeve gastrectomy in the Hospital District of Helsinki and Uusimaa during the years 2007-2009: 147 gastric bypass patients and 79 sleeve gastrectomy patients. In this study the decision-analytic model used in the Finohta study "Sairaalloisen lihavuuden leikkaushoito" (surgical treatment of morbid obesity) was updated using actual, up-to-date information. The analysis was done using a decision tree and a Markov model with a time horizon of 10 years. The cost data in this study was based on actual data for the first two years after surgery. A forecast model was used to predict the costs for years 3-10 after surgery. Patients' quality of life scores were based on real data for years 1 (the year of operation) to 4; scores for the other years were predicted. In the literature review section, international studies on the cost-effectiveness of bariatric surgery and its impacts on drug therapy were evaluated. The studies showed that the use of medicines to treat obesity-related diseases was lower in the surgery group, whereas the use of drugs to treat vitamin deficiencies, depression and gastrointestinal diseases was higher. Most studies found that surgery is the most cost-effective way to treat morbid obesity. This study confirms the role of bariatric surgery in the treatment of morbid obesity in Finland. Even though healthcare costs increased in the first two years after the operation, the conclusions of the Finohta study did not change: bariatric surgery is cheaper and more effective than ordinary treatment and the most cost-effective way to treat morbid obesity. The mean costs were 30 309 € for the gastric bypass, 31 838 € for the sleeve gastrectomy and 36 482 € for ordinary treatment. The mean numbers of quality-adjusted life-years were 6.919 for the gastric bypass, 6.920 for the sleeve gastrectomy and 6.661 for ordinary treatment. However, more information is needed about the long-term effects, benefits and risks of the surgery. How much money the surgery actually saves will hopefully be clarified in the long-term follow-up study, which should also include an actual control group.
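A minimal sketch of the decision-analytic structure described above: a Markov cohort model that accumulates discounted costs and QALYs over a 10-year horizon. All transition probabilities, costs and utilities below are invented placeholders, not the Finohta/HUS figures:

```python
# Markov cohort model: costs and QALYs accrued per cycle, discounted.

def run_markov(trans, costs, utilities, start, years=10, discount=0.03):
    """trans[s]: dict mapping state s to next-state probabilities.
    Returns (total discounted cost, total discounted QALYs) per patient."""
    dist = dict(start)                      # cohort distribution over states
    total_cost = total_qaly = 0.0
    for year in range(years):
        d = 1.0 / (1.0 + discount) ** year  # discount factor for this cycle
        total_cost += d * sum(dist[s] * costs[s] for s in dist)
        total_qaly += d * sum(dist[s] * utilities[s] for s in dist)
        nxt = {s: 0.0 for s in dist}        # apply one year of transitions
        for s, p in dist.items():
            for s2, q in trans[s].items():
                nxt[s2] += p * q
        dist = nxt
    return total_cost, total_qaly

# Hypothetical strategy with two health states (all numbers illustrative):
trans = {'well': {'well': 0.97, 'sick': 0.03}, 'sick': {'sick': 1.0}}
surgery = run_markov(trans, {'well': 500.0, 'sick': 3000.0},
                     {'well': 0.85, 'sick': 0.60}, {'well': 1.0, 'sick': 0.0})
```

Running the same model for a comparator strategy and dividing the cost difference by the QALY difference gives the incremental cost-effectiveness ratio underlying conclusions like the ones above.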
  • Laakkonen, Antti (2020)
    Understanding soil respiration behaviour in different environments is one of the most crucial current research questions in environmental sciences, since soil respiration is a major component of the carbon cycle. It can be divided into several source components: litter decomposition, soil organic matter, root respiration and respiration in the rhizosphere. Many biotic and abiotic factors control soil respiration through complicated networks of relationships; strong controlling factors include soil temperature, soil moisture, substrate supply and quality, soil nitrogen content, soil acidity and soil texture. As these relationships are biome-specific, they must be understood in order to produce more accurate assessments worldwide. In this study, annual soil respiration rates and their controlling factors were investigated globally in unmanaged, natural mature forest biomes. Observed values were extracted from the Soil Respiration Database (SRDB) v.5 and complemented with spatially and temporally linked data from remotely sensed and modelled databases to produce variables for forest productivity, meteorological conditions and soil properties. Furthermore, empirical soil respiration models and machine learning algorithms, as well as previous estimates, were compared to each other. Locally, monthly manual soil respiration measurements from a boreal forest site in Hyytiälä, Finland from the years 2010-2011, together with environmental, soil temperature and soil water conditions, were investigated to identify seasonal differences in the controlling factors of the soil respiration rate. Soil respiration controls were found to differ between biomes. Furthermore, the artificial neural network algorithm used was observed to outperform the empirical models and previous estimates when biome-specific modelling was implemented with the continental division. Artificial neural networks and other algorithms could produce more accurate estimates globally. Locally, soil respiration rates were observed to differ seasonally: soil temperature control was stronger during the growing season, whereas when snow depth exceeded 30 cm, soil water conditions controlled soil respiration strongly.
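The empirical soil respiration models referred to above are commonly of the Q10 form, R = R_ref · Q10^((T − T_ref)/10); whether the thesis uses exactly this form is an assumption here. A rough sketch of fitting such a model by grid search on invented temperature/respiration pairs (the thesis fits models to SRDB observations and compares them with an ANN):

```python
# Q10-type empirical respiration model and a naive grid-search fit.
# All data and the parameter grids are illustrative.

def q10_respiration(T, r_ref, q10, t_ref=10.0):
    """Respiration rate at soil temperature T (degrees C)."""
    return r_ref * q10 ** ((T - t_ref) / 10.0)

def fit_q10(temps, resp):
    """Least-squares grid search over (r_ref, q10); coarse but transparent."""
    best = None
    for q10 in [1 + i * 0.05 for i in range(60)]:        # 1.00 ... 3.95
        for r_ref in [0.5 + i * 0.05 for i in range(90)]:  # 0.50 ... 4.95
            sse = sum((q10_respiration(t, r_ref, q10) - r) ** 2
                      for t, r in zip(temps, resp))
            if best is None or sse < best[0]:
                best = (sse, r_ref, q10)
    return best[1], best[2]
```

In practice such models are fit with nonlinear least squares; the grid search just makes the structure of the problem explicit.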
  • Kukkonen, Tommi (2020)
    The Arctic is warming at an increased pace, which can affect ecosystems, infrastructure and communities. By studying periglacial landforms and processes, and by using improved methods, more knowledge of these changing environmental conditions and their impacts can be obtained. The aim of this thesis is to map the studied landforms and predict their probability of occurrence in the circumpolar region utilizing different modelling methods. Periglacial environments occur at high latitudes and in other cold regions. These environments host permafrost, frozen ground that responds strongly to climate warming and underlies areas that host many landform types. Therefore, landform monitoring and modelling in permafrost regions under a changing climate can provide information about the ongoing changes in the Arctic and about landform distributions. Here four landform/process types were mapped and studied: patterned ground, pingos, thermokarst activity and solifluction. The study consisted of 10 study areas across the circumpolar Arctic that were mapped for their landforms. The study utilized GLM, GAM and GBM analyses to determine landform occurrence in the Arctic based on environmental variables. Model calibration utilized a logit link function, and model fit was evaluated with the explained deviance. The data were split into calibration and evaluation sets to assess prediction ability. The predictive accuracy of the models was assessed using ROC/AUC values. Thermokarst activity proved to be the most abundant in the studied areas, whereas solifluction activity was the most scarce. Pingos were discovered evenly throughout the studied areas, and patterned ground activity was absent in some areas but rich in others. Climate variables and mean annual ground temperature had the biggest influence in explaining landform occurrence throughout the circumpolar region. GBM proved to be the most accurate and had the best predictive performance. The results show that mapping and modelling at the mesoscale is possible, and in the future similar studies could be utilized in monitoring efforts regarding global change and in studying environmental and periglacial landform/process interactions.
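The ROC/AUC evaluation used above has a compact rank-based (Mann-Whitney) formulation: AUC is the probability that a randomly chosen presence cell receives a higher predicted score than a randomly chosen absence cell. A small sketch, with illustrative labels and scores:

```python
# AUC via pairwise comparison of presence (1) and absence (0) scores.

def roc_auc(labels, scores):
    """labels: 1 = landform present, 0 = absent; scores: model predictions."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    # a 'win' is a presence cell outscoring an absence cell; ties count half
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 means the model ranks cells no better than chance, while 1.0 means presences are always ranked above absences; GBM "having the best predictive performance" corresponds to the highest such value.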
  • Airola, Sofia (2014)
    The subject of this thesis was to evaluate the capability of the FEMMA model to simulate the daily nitrogen load from a forested catchment. For that purpose, FEMMA was tested on a forest plot in Hyytiälä, Juupajoki. The modelled concentrations of ammonium, nitrate and dissolved organic nitrogen in the runoff water were compared to the measured values. This work presents the current state of knowledge concerning the most significant nitrogen processes in forest soil, as reported in the literature. It also lists some alternative models for simulating nitrogen and critically evaluates the uncertainties in the modelling. As a result, FEMMA was found not to be suitable for simulating the daily nitrogen load from this catchment: the simulated results did not correspond to the measured values. The most significant development needs identified for FEMMA were the parametrization of gaseous nitrogen losses from the system, re-examining the nitrogen uptake by plants, and developing the computation of the fractions of nitrogen released in decomposition. For future research it would be important to decide whether it is meaningful to simulate daily nitrogen leaching with process-based models at all. At least at the Hyytiälä site, the amount of leached nitrogen is so small compared to the nitrogen in other processes that it is quite challenging to simulate it accurately enough.
  • Lehtiniemi, Heidi (2020)
    Computing complex phenomena into models that provide information on causalities and future scenarios is a very topical way to present scientific information. Many claim models to be the best available tool for providing decision-making with information about near-future scenarios and the action needed (Meah, 2019; Schirpke et al., 2020). This thesis studies global climate models based on objective data alongside local ecosystem services models combining ecological and societal data, offering an extensive overview of modern environmental modelling. In addition to modelling, the science-policy boundary is important when analyzing the societal usefulness of models. Useful and societally relevant modelling is analyzed with an integrative literature review (Whittemore & Knafl, 2005) on the topics of climate change, ecosystem services, modelling and the science-policy boundary, n=58. Literature from various disciplines and viewpoints is included in the material. Since the aim is to create a comprehensive understanding of the multidisciplinary phenomenon of modelling, the focus is not on its technical aspects. Based on the literature, types of uncertainty in models and strategies to manage them are identified (e.g. van der Sluijs, 2005), and characteristics of useful models and other forms of scientific information are recognized (e.g. Saltelli et al., 2020). Usefulness can be achieved when models are fit for purpose, accessible and solution-oriented, and when sufficient interaction and trust are established between model users and developers. Climate change and ecosystem services are analyzed as case studies throughout the thesis. The relationship of science and policy is especially important when solving the sustainability crisis. Because modelling is a boundary object (Duncan et al., 2020), the role of boundary work in managing and communicating uncertainties and ensuring the usefulness of models is at the centre of the analysis.
  • Honkanen, Henri (2022)
    Remote sensing brings new potential to complement environmental sampling and measuring traditionally conducted in the field. Satellite images can bring spatial coverage and accurately repeated time-series data collection to a whole new level. While methods for ecological assessment from space are being developed, in situ sampling still plays a key role. Satellite images of relatively coarse pixel size, in which individual plants or trees cannot be distinguished, usually rely on vegetation indices as proxies for environmental qualities and measures. One of the most extensively used and studied vegetation indices is the Normalized Difference Vegetation Index (NDVI). It is calculated as the normalized ratio between red light and near-infrared radiation: NDVI = (NIR - RED) / (NIR + RED). The index functions as a measure of plant productivity that has also been linked to species-level diversity. In this thesis, MODIS NDVI (MOD13Q1, 250 m x 250 m resolution) and selected additional variables were examined for their power to explain variation in tree species richness in six different types of moist tropical evergreen forest in the province of West Kalimantan, on the island of Borneo in Indonesia. Simple and multiple regression models were built and tested, with the main focus on the 20-year mean NDVI. The additional variables used were aboveground carbon, elevation, stem count, tree height and DBH. The additional variables were examined initially on an individual basis, and potential variables were subsequently combined with NDVI. The results indicate statistically significant but not very strong predictive power for NDVI (R2=0.25, p-value=2.11e-07). Elevation and number of stems outperformed NDVI in the regression analyses (R2=0.64, p-value=2.2e-16 and R2=0.36, p-value=4.5e-11, respectively). Aboveground biomass carbon explained 19% of the variation in tree species richness (p-value=6.136e-06) and was thus the worst predictor selected for the multiple regression models. Tree height (R2=0.062, p-value=0.0137) and DBH (R2=0.003, p-value=0.6101) did not show any potential for predicting tree species richness. The best variable combination was NDVI, elevation and stem count (R2=0.71, p-value=2.2e-16). Second best was NDVI, elevation and aboveground biomass carbon (R2=0.642, p-value=2.2e-16), which did not support biomass carbon as a potential predictor, as a model including only NDVI and elevation performed nearly identically (R2=0.639, p-value=2.2e-16). A model including NDVI and stem count explained 54% of the variation in tree species richness (p-value=2.2e-16), suggesting that elevation and stem count are potential variables to combine with NDVI in this type of analysis. The problems with MODIS NDVI are mostly linked to its relatively coarse spatial scale, which seems to be too coarse for predicting tree species richness. The coarse scale also caused a spatial mismatch with the field plots, as the plots and pixels were of significantly different sizes. Applicability in other areas is also limited due to the narrow spectrum of ecosystems covered, as only tropical evergreen forests were included in this study. For future research, higher-resolution satellite data is a relevant upgrade. In terms of methodology, an alternative approach known as the Spectral Variability Hypothesis (SVH), which takes into account heterogeneity in spectral reflectance, seems a more appropriate method for relating spectral signals to tree species richness.
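The NDVI formula above is a one-line computation, and the simple regressions reduce to ordinary least squares; a minimal sketch, with band values and richness counts invented for illustration:

```python
# NDVI from red and near-infrared reflectance, and the R-squared of a
# simple linear regression (the statistic reported throughout the abstract).

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def r_squared(x, y):
    """R-squared of a simple linear regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy ** 2 / (sxx * syy)

# Invented plot-level data: mean NDVI vs. observed tree species richness
mean_ndvi = [0.61, 0.72, 0.68, 0.80, 0.75, 0.66]
richness = [42, 55, 49, 70, 61, 47]
fit = r_squared(mean_ndvi, richness)
```

For simple regression, this R² equals the squared Pearson correlation, which is why a single weak predictor like tree height (R²=0.062) adds little on its own.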
  • Nystedt, Ari (2019)
    Modern, intensive silviculture has affected grouse negatively. The main reasons are changes in the ground vegetation and the decreasing proportion of blueberry. The main features of grouse habitats are variety in the forest cover and protection from the understorey. In managed forests such variation can be increased via thickets. Thicket size varies from a couple of trees to approximately two ares. Thickets are uncleared patches containing trees of various sizes. To favour grouse via game-friendly forest management, information about the habitat is required at the forest-site level and over a broader area. Observations of grouse at the forest site and information about the capercaillie's lekking sites, the willow grouse's habitats and the wintering areas have been beneficial. Information about grouse densities and population fluctuations has been gathered via game triangles. Guide books on game husbandry contain information about grouse habitats and thicket characteristics. The aim of this study was to investigate whether it is possible to model suitable thickets and grouse habitats with open GIS (Geographical Information Systems) material via GIS analyses. The grouse species examined in the modelling were the capercaillie, black grouse and hazel grouse. A Weighted Overlay analysis was done with ArcMap software. Suitable thickets and habitats were examined in the whole research area and in suitable forest compartments. Based on the results of the analysis, theme maps were made to represent the research area's suitability for thickets and grouse habitats. The material needed for the thickets was collected and the GIS analyses were made in a research area in Tavastia Proper, Hausjärvi. For the research, 12 one-hectare squares were created. In total, 45 areas suitable for thickets were charted via field inventory. After the field inventory and GIS analyses, the results were compared. Key statistics for the thickets were the number of thickets, their areas, the distance to the nearest thicket, averages and standard deviations. Statistical methods were applied to examine possible statistically significant differences between areas and between distances to the nearest thicket. The tests performed were one-way ANOVA and Kruskal-Wallis. The tree characteristics of grouse habitats were examined with an up-to-date forest management plan. Tree characteristics were examined for 17 suitable compartments, covering a total area of 42.6 hectares. In the field inventory, the average number of thickets found in a research grid was 3.8, and with modelling 1.4. The average area of a thicket was 76.9 m2 in the field inventory and 252 m2 in the modelling. The average distance between thickets was 12.6 metres in the field inventory and 24.8 metres with modelling. In the field inventory, thickets covered approximately 2.9 percent, and in the modelling 3.6 percent, of the research grid's total area. According to the statistical analyses, there was a statistically significant difference between the inventory methods in the total thicket area and in the distance to the nearest thicket. According to the modelling and the forest management plan, the capercaillie's habitats were located in mature pine stands. Black grouse habitats were located in spruce-dominated, young forest stands. Hazel grouse habitats included a high proportion of broad-leaved trees, which were visible in ecotones between forest and field. Common to capercaillie, black grouse and hazel grouse habitats were a minor surface area and a mosaic-like structure. In conclusion, thickets and grouse habitats can be modelled with open GIS material. However, modelling requires knowing the characteristics of the thickets and the examined species. With weighted overlay, thickets were not found in areas where canopy density and spruce volume were naturally low. Research is needed to verify thicket occupancy with trail cameras. The ecological impacts of saving thickets in the research area require evaluation.
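The Weighted Overlay step used above amounts to a weighted sum of reclassified raster layers; a toy sketch over 3x3 grids, where the layer values, weights and the suitability threshold are invented for illustration, not the study's ArcMap parameters:

```python
# Weighted overlay of reclassified suitability rasters (scores 1-9),
# followed by thresholding to pick candidate thicket cells.

def weighted_overlay(layers, weights):
    """layers: equal-sized 2-D grids of suitability scores; weights sum to 1."""
    rows, cols = len(layers[0]), len(layers[0][0])
    return [[sum(w * layer[r][c] for layer, w in zip(layers, weights))
             for c in range(cols)] for r in range(rows)]

canopy = [[9, 7, 1], [8, 6, 2], [3, 2, 1]]   # reclassified canopy density
spruce = [[8, 8, 2], [7, 5, 1], [2, 1, 1]]   # reclassified spruce volume
score = weighted_overlay([canopy, spruce], [0.6, 0.4])
thicket_cells = [(r, c) for r in range(3) for c in range(3) if score[r][c] >= 7]
```

This also illustrates the limitation noted above: cells where both input layers score low (e.g. naturally low canopy density and spruce volume) can never pass the threshold, so thickets in such areas go undetected.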