
Browsing by Subject "mallintaminen" (modelling)


  • Kallio, Varpu (2014)
    The purpose of this study is to evaluate patients' quality of life and healthcare use before and after bariatric surgery and to produce new, clinical-data-based information on the cost-effectiveness of bariatric surgery. Healthcare resources are limited and expenditures have grown from year to year, so cost-effectiveness evaluations are needed to allocate financial resources properly. The research population consists of patients who underwent gastric bypass or sleeve gastrectomy in the Hospital District of Helsinki and Uusimaa during the years 2007-2009: 147 gastric bypass patients and 79 sleeve gastrectomy patients. In this study, the decision-analytic model used in the Finohta study "Sairaalloisen lihavuuden leikkaushoito" (Surgical treatment of morbid obesity) was updated with actual, up-to-date information. The analysis was done using a decision tree and a Markov model with a time horizon of 10 years. The cost data were based on actual data for the first two years after surgery; a forecast model was used to predict the costs for years 3-10. Patients' quality-of-life scores were based on real data for years 1 (the year of operation) to 4 and predicted for the remaining years. In the literature review section, international studies on the cost-effectiveness of bariatric surgery and its impacts on drug therapy were evaluated. These studies showed that the use of medicines for obesity-related diseases was lower in the surgery group, whereas the use of drugs for vitamin deficiencies, depression and gastrointestinal diseases was higher. Most studies found that surgery is the most cost-effective way to treat morbid obesity. This study confirms the role of bariatric surgery in the treatment of morbid obesity in Finland. Even though healthcare costs increased in the first two years after the operation, the conclusions of the Finohta study did not change: bariatric surgery is cheaper and more effective than ordinary treatment and is the most cost-effective way to treat morbid obesity. The mean costs were 30 309 € for gastric bypass, 31 838 € for sleeve gastrectomy and 36 482 € for ordinary treatment. The mean numbers of quality-adjusted life-years (QALYs) were 6.919 for gastric bypass, 6.920 for sleeve gastrectomy and 6.661 for ordinary treatment. However, more information is needed on the long-term effects, benefits and risks of the surgery. How much money the surgery actually saves will hopefully be clarified in a long-term follow-up study, which should also include an actual control group.
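    The comparison above is a standard decision-tree-plus-Markov cost-utility setup. The minimal Python sketch below shows the mechanics of such a Markov cohort model; the health states, transition probabilities, costs, utilities and the one-off operation cost are hypothetical placeholders for illustration, not the Finohta or HUS parameters used in the thesis.

```python
# Minimal Markov cohort sketch of a 10-year cost-utility comparison.
# All states, transition probabilities, costs and utilities are hypothetical
# placeholders, NOT the Finohta/HUS parameters used in the thesis.
import numpy as np

def markov_cea(trans, state_costs, state_utils, start, years=10, disc=0.03):
    """Expected discounted costs and QALYs for one treatment strategy."""
    dist = np.asarray(start, dtype=float)      # cohort distribution over states
    total_cost = total_qaly = 0.0
    for t in range(years):
        d = 1.0 / (1.0 + disc) ** t            # discount factor for cycle t
        total_cost += d * (dist @ state_costs)
        total_qaly += d * (dist @ state_utils)
        dist = dist @ trans                    # advance the cohort one cycle
    return total_cost, total_qaly

# States: 0 = obese, 1 = comorbidities in remission, 2 = dead (absorbing)
surgery = np.array([[0.20, 0.78, 0.02],
                    [0.10, 0.88, 0.02],
                    [0.00, 0.00, 1.00]])
usual   = np.array([[0.93, 0.05, 0.02],
                    [0.40, 0.58, 0.02],
                    [0.00, 0.00, 1.00]])
costs = np.array([4000.0, 1500.0, 0.0])        # annual cost per state (EUR)
utils = np.array([0.70, 0.85, 0.0])            # annual utility per state

c_s, q_s = markov_cea(surgery, costs, utils, [1, 0, 0])
c_u, q_u = markov_cea(usual,   costs, utils, [1, 0, 0])
print(f"surgery: {c_s + 9000:8.0f} EUR, {q_s:.3f} QALYs")  # + operation cost
print(f"usual:   {c_u:8.0f} EUR, {q_u:.3f} QALYs")
```

    The printed per-strategy mean discounted costs and QALYs are the same summary quantities the thesis reports for the three treatment alternatives.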
  • Niemelä, Kirsi (2011)
    The aim of this study was to develop mathematical energy balance models for the early and middle lactation periods of dairy cows. The predictors were traits describing the diet, feed, milk production, milk composition, body weight and body condition score. This study was part of the development work of the KarjaKompassi project. The data were based on 12 feeding experiments performed in Finland, comprising 2647 weekly records from multiparous dairy cows and 1070 weekly records from primiparous dairy cows, collected from calving to 8-28 weeks of lactation. Three-fourths of the 344 dairy cows were Finnish Ayrshire cows and the rest were Friesians. The cows were fed according to the Finnish feeding standards. The data were analysed with the MIXED procedure of the SAS software, and outliers were removed with Tukey's method. The relationship between energy balance and the predictor traits was studied with correlation analysis, and regression analysis was used to predict energy balance. To quantify the relationship of lactation day to energy balance, five functions were fitted, with cow within experiment as a random factor. Model fit was assessed by residual mean square error, coefficient of determination and the Bayesian information criterion, and the best models were validated on independent data. The Ali-Schaeffer function achieved the best fit and was used as the basal model. The error of every model grew after the 12th lactation week, because the number of records decreased and energy balance turned positive. The proportion of concentrate in the diet and the concentrate dry matter intake index were the best predictors of energy balance among the diet traits. Milk yield, energy-corrected milk (ECM), milk fat and the milk fat-protein ratio were good predictors during the lactation period; the RMSE was lower when ECM was standardized. Body weight and body condition score did not improve the predictive value of the basal model. The models can be used to predict energy balance at the herd level, but they are not applicable to predicting the energy balance of individual cows.
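    The modelling pipeline described here (Tukey outlier screening, then a mixed model with cow as a random factor) can be sketched in Python with statsmodels in place of SAS PROC MIXED. The input file and column names below are hypothetical, and a plain linear lactation-week term stands in for the Ali-Schaeffer lactation curve actually used in the thesis.

```python
# Sketch of the mixed-model fit with statsmodels instead of SAS PROC MIXED.
# The input file and column names (energy_balance, ecm, fat_protein_ratio,
# lactation_week, cow_id) are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("weekly_records.csv")         # hypothetical weekly records

# Tukey-style outlier screen: drop records outside the 1.5 x IQR fences
q1, q3 = df["energy_balance"].quantile([0.25, 0.75])
iqr = q3 - q1
df = df[df["energy_balance"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)]

# Random intercept per cow; a linear week term stands in for the
# Ali-Schaeffer lactation curve used in the thesis.
model = smf.mixedlm("energy_balance ~ ecm + fat_protein_ratio + lactation_week",
                    data=df, groups=df["cow_id"])
fit = model.fit()
print(fit.summary())                           # coefficients and fit statistics
```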
  • Airola, Sofia (2014)
    The subject of this thesis was to evaluate the capability of the FEMMA model to simulate the daily nitrogen load from a forested catchment. For this purpose, FEMMA was tested on a forest plot in Hyytiälä, Juupajoki. The modelled concentrations of ammonium, nitrate and dissolved organic nitrogen in the runoff water were compared with the measured values. This work presents the current state of knowledge concerning the most significant nitrogen processes in forest soil, as reported in the literature. It also lists some alternative models for simulating nitrogen and critically evaluates the uncertainties in the modelling. As a result, FEMMA was found unsuitable for simulating the daily nitrogen load from this catchment: the simulated results did not correspond to the measured values. The most significant development needs identified in FEMMA were the parametrisation of gaseous nitrogen losses from the system, the re-examination of nitrogen uptake by plants, and the computation of the fractions of nitrogen released in decomposition. For future research, it would be important to decide whether it is meaningful to simulate daily nitrogen leaching with process-based models at all. At least at the Hyytiälä site, the amount of leached nitrogen is so small compared with the nitrogen in other processes that it is quite challenging to simulate it accurately enough.
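    A sketch of how simulated and measured daily concentrations could be compared numerically is given below. The file and column names are hypothetical, and the Nash-Sutcliffe efficiency is a common goodness-of-fit choice for hydrological models, not necessarily the exact metric used in the thesis.

```python
# Sketch: compare simulated and measured daily N concentrations with the
# Nash-Sutcliffe efficiency. File and column names are hypothetical.
import numpy as np
import pandas as pd

sim = pd.read_csv("femma_simulated.csv", parse_dates=["date"])
obs = pd.read_csv("measured_runoff.csv", parse_dates=["date"])
merged = sim.merge(obs, on="date", suffixes=("_sim", "_obs"))

def nse(simulated, observed):
    """Nash-Sutcliffe efficiency: 1 = perfect, <= 0 = no better than the mean."""
    s, o = np.asarray(simulated, float), np.asarray(observed, float)
    return 1.0 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2)

for n_form in ("nh4", "no3", "don"):  # ammonium, nitrate, dissolved organic N
    print(n_form.upper(), round(nse(merged[f"{n_form}_sim"],
                                    merged[f"{n_form}_obs"]), 2))
```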
  • Honkanen, Henri (2022)
    Remote sensing brings new potential to complement environmental sampling and measuring traditionally conducted in the field. Satellite images can bring spatial coverage and accurately repeated time-series data collection to a whole new level. While methods for ecological assessment from space are still being developed, in situ sampling remains in a key role. Satellite images with a relatively coarse pixel size, in which individual plants or trees cannot be separated, usually rely on vegetation indices as proxies for environmental qualities and measures. One of the most extensively used and studied vegetation indices is the Normalized Difference Vegetation Index (NDVI). It is calculated as the normalized ratio between red light and near-infrared radiation: NDVI = (NIR - RED) / (NIR + RED). The index functions as a measure of plant productivity, which has also been linked to species-level diversity. In this thesis, MODIS NDVI (MOD13Q1, 250 m x 250 m resolution) and selected additional variables were examined for their power to explain variation in tree species richness in six different types of moist tropical evergreen forest in the province of West Kalimantan, on the island of Borneo, Indonesia. Simple and multiple regression models were built and tested, with the main focus on the 20-year mean NDVI. The additional variables were aboveground carbon, elevation, stem count, tree height and DBH (diameter at breast height); they were examined individually first, and the potential ones were then combined with NDVI. The results indicate statistically significant but not very strong predictive power for NDVI (R² = 0.25, p = 2.11e-07). Elevation and stem count outperformed NDVI in the regression analyses (R² = 0.64, p = 2.2e-16 and R² = 0.36, p = 4.5e-11, respectively). Aboveground biomass carbon explained 19% of the variation in tree species richness (p = 6.136e-06) and was thus the worst of the predictors selected for the multiple regression models. Tree height (R² = 0.062, p = 0.0137) and DBH (R² = 0.003, p = 0.6101) did not show any potential for predicting tree species richness. The best variable combination was NDVI, elevation and stem count (R² = 0.71, p = 2.2e-16). The second best was NDVI, elevation and aboveground biomass carbon (R² = 0.642, p = 2.2e-16), which did not support biomass carbon as a potential predictor, since the model including only NDVI and elevation performed nearly identically (R² = 0.639, p = 2.2e-16). A model including NDVI and stem count explained 54% of the variation in tree species richness (p = 2.2e-16), suggesting that elevation and stem count are potential variables to combine with NDVI in this type of analysis. The problems with MODIS NDVI are mostly linked to its relatively coarse spatial resolution, which seems too coarse for predicting tree species richness; the pixel size also caused a spatial mismatch with the field plots, which were of significantly different sizes. Applicability in other areas is also limited because of the narrow ecosystem spectrum covered, as only tropical evergreen forests were included in this study. For future research, higher-resolution satellite data would be a relevant update. In terms of methodology, an alternative approach known as the Spectral Variability Hypothesis (SVH), which takes into account heterogeneity in spectral reflectance, seems a more appropriate method for relating spectral signals to tree species richness.
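    The regression setup described above can be outlined briefly in Python. The plot-level table and its column names are hypothetical; in the thesis, NDVI came directly from the MODIS MOD13Q1 product, so the band arithmetic below only illustrates the NDVI formula.

```python
# Sketch of the simple and multiple regressions; column names hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

plots = pd.read_csv("plots.csv")               # hypothetical plot-level table

# NDVI = (NIR - RED) / (NIR + RED), if starting from reflectances;
# the thesis read NDVI directly from the MOD13Q1 product instead.
plots["ndvi"] = (plots["nir"] - plots["red"]) / (plots["nir"] + plots["red"])

simple = smf.ols("species_richness ~ ndvi", data=plots).fit()
multi = smf.ols("species_richness ~ ndvi + elevation + stem_count",
                data=plots).fit()
print(simple.rsquared, simple.f_pvalue)        # cf. R² = 0.25 in the thesis
print(multi.rsquared, multi.f_pvalue)          # cf. R² = 0.71 for the trio
```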
  • Kuusisto, Joonas (2023)
    The critical brain hypothesis proposes that the brain operates at a phase transition between ordered and disordered states, in the vicinity of the critical point. The hypothesis has its roots in statistical physics and thermodynamics, which aim to explain the emergent complexity of nature and the behaviour of physical systems by investigating their statistical collective properties instead of looking solely at the micro level. Physical criticality can be seen as a fundamental phenomenon in which the collective action of systems becomes independent of their microscopic details and many seemingly unrelated systems are characterised by the same macro-level attributes. Operation at the critical point maximises many attributes beneficial to the information-processing capability of the brain, such as information capacity, dynamic range and information transfer. Within the past 25 years, the criticality hypothesis has progressed from a theoretical framework to a convincing explanation for the operation of the brain. There is accumulating evidence from computer simulations, in vitro experiments, animal experiments and human MEG and EEG data. The mathematical background and the path from the depolarisation of neurons to statistical inference have been perceived as abstract and difficult, and the criticality hypothesis receives only moderate attention in clinical use and teaching compared with its possible explanatory power. In this study, the basic concepts and history of criticality are introduced by reviewing the relevant literature and showing examples, and a chapter is devoted to clinical applications. A nested, hierarchical model was built to explore the functional and structural dimensions of the brain, and common algorithms for computing outcome measures were implemented in-house. Under a linear topology, the model functioned as a non-nested model that complied with the hypothesis. The results suggest that the model is a viable platform for exploring the contribution of structure to brain dynamics. Biologically motivated structural modelling may have future clinical applications in the understanding, diagnosis and assessment of treatment outcomes in neurological and psychiatric disorders. This study may also raise awareness of the criticality hypothesis, as there is a limited number of reviews that go through the mathematical concepts, basic research and clinical examples.
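    As a toy illustration of the critical point referred to above (a textbook branching process, not the nested hierarchical model built in the thesis), the sketch below shows the qualitative change at branching ratio sigma = 1: avalanche sizes become heavy-tailed at criticality.

```python
# Toy branching process: offspring per active unit ~ Poisson(sigma).
# sigma < 1 is subcritical, sigma = 1 critical, sigma > 1 supercritical.
# This is a generic illustration, not the thesis's hierarchical model.
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(sigma, cap=10_000):
    """Total number of activations triggered by a single seed unit."""
    active, size = 1, 1
    while active and size < cap:
        active = rng.poisson(sigma * active)   # next generation of activations
        size += active
    return size

for sigma in (0.8, 1.0, 1.2):
    sizes = np.array([avalanche_size(sigma) for _ in range(5000)])
    print(f"sigma={sigma}: mean {sizes.mean():7.1f}, "
          f"P(size >= 100) = {(sizes >= 100).mean():.3f}")
```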
  • Nystedt, Ari (2019)
    Modern, intensive silviculture has affected grouse negatively. The main reasons are changes in the ground vegetation and the decreasing proportion of bilberry. Key features of grouse habitats are variation in the forest cover and the shelter provided by the understorey. In managed forests, this variation can be increased by retaining thickets. Thicket size varies from a couple of trees to approximately two ares; thickets are uncleared patches containing trees of various sizes. To favour grouse through game-friendly forest management, information about the habitat is required both at the forest-stand level and over a broader area. Observations of grouse at the forest site and information about capercaillie lekking sites, willow grouse habitats and wintering areas have proved useful. Information about grouse densities and population fluctuations has been gathered via game triangles. Guidebooks on game husbandry contain information about grouse habitats and thicket characteristics. The aim of this study was to investigate whether it is possible to model suitable thickets and grouse habitats from open GIS (Geographical Information Systems) material via GIS analyses. The grouse species examined in the modelling were capercaillie, black grouse and hazel grouse. A weighted overlay was done with ArcMap software. Suitable thickets and habitats were examined over the whole research area and within suitable forest compartments. Based on the results of the analysis, thematic maps were made to represent the research area's suitability for thickets and grouse habitats. The material for the thickets was collected and the GIS analyses were made in a research area in Hausjärvi, Tavastia Proper. For the research, 12 one-hectare squares were created, and altogether 45 areas suitable for thickets were charted via field inventory. After the field inventory and the GIS analyses, the results were compared. Key figures for the thickets were their number, areas, distance to the nearest thicket, averages and standard deviations. Statistical methods were applied to examine possible statistically significant differences between the areas and between the distances to the nearest thicket; the tests performed were one-way ANOVA and Kruskal-Wallis. The tree characteristics of the grouse habitats were examined with an up-to-date forest management plan, covering 17 suitable compartments with a total area of 42.6 hectares. In the field inventory, the average number of thickets found per research square was 3.8, compared with 1.4 in the modelling. The average thicket area was 76.9 m² in the field inventory and 252 m² in the modelling, and the average distance between thickets was 12.6 m in the field inventory and 24.8 m in the modelling. Thickets covered approximately 2.9 percent of the research grid's total area in the field inventory and 3.6 percent in the modelling. According to the statistical analyses, the inventory method had a statistically significant effect on the total thicket area and on the distance to the nearest thicket. According to the modelling and the forest management plan, capercaillie habitats were located in mature pine stands, black grouse habitats in spruce-dominated young forest stands, and hazel grouse habitats in areas with a high proportion of broad-leaved trees, visible in the ecotones between forest and field. Common to the capercaillie, black grouse and hazel grouse habitats were a small surface area and a mosaic-like structure. In conclusion, thickets and grouse habitats can be modelled with open GIS material. However, the modelling requires knowledge of the characteristics of the thickets and of the species examined. With weighted overlay, thickets were not found in areas where canopy density and spruce volume were naturally low. Further research with trail cameras is needed to verify whether the thickets are occupied, and the ecological impacts of retaining thickets in the research area require evaluation.
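    The weighted overlay used in the thesis was run in ArcMap, but the underlying raster arithmetic is simple and can be sketched in Python with numpy. The layers, suitability scores, weights and decision threshold below are hypothetical placeholders.

```python
# Weighted overlay on toy rasters; layers, scores 1-5, weights and the
# decision threshold are hypothetical placeholders.
import numpy as np

# Reclassified input rasters, each cell scored 1 (poor) .. 5 (good)
canopy_density = np.array([[5, 4, 2], [3, 5, 1], [2, 4, 5]])
spruce_volume  = np.array([[4, 5, 1], [2, 4, 2], [1, 3, 5]])
understorey    = np.array([[3, 5, 2], [4, 4, 1], [2, 5, 4]])

weights = [0.40, 0.35, 0.25]                   # must sum to 1
layers = [canopy_density, spruce_volume, understorey]

suitability = sum(w * lyr for w, lyr in zip(weights, layers))
thicket_candidates = suitability >= 4.0        # candidate thicket cells
print(np.round(suitability, 2))
print(thicket_candidates)
```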