
Browsing by Title


  • Skullbacka, Simone (2019)
    Many drugs are associated with a risk of QT prolongation and torsades de pointes (TdP). The risk increases with other risk factors for QT prolongation. Recognizing risk factors and QT prolonging drugs is critical in the management of this drug-related problem. The aim of this master’s thesis was to study the prevalence of use of QT prolonging drugs in older adults receiving home care. Additionally, the aim was to study concomitant use of QT prolonging drugs as well as clinically significant QT prolonging drug-drug interactions in the participants. The secondary objective was to identify the most commonly used QT prolonging drugs among the participants. The material used in this master’s thesis originated from a randomized controlled trial in the City of Lohja, Finland, which enhanced coordination in medication risk management for older home care clients. For this thesis, the analysis of the baseline data collected in fall 2015 was deepened with regard to QT prolonging drugs. The participants (n=188) were older adults (≥65 years) receiving regular home care from the City of Lohja, randomized into an intervention group (n=101) and a control group (n=87). The majority of the participants were women (69%). The mean age of the participants was 83 years. Data on the participants’ drugs were collected from their medication lists. Clinically significant drug-drug interactions were identified using the SFINX database. The QTDrugs Lists of CredibleMeds were used for identifying drugs associated with QT prolongation and TdP. On average, the participants (n=188) used 2.3 drugs (SD 1.3, median 2.0) associated with QT prolongation and TdP. Of the participants, 36% (n=67) used drugs with a known risk of TdP (QTDrugs List 1). The most commonly used drugs with a known risk of TdP were donepezil and citalopram. The prevalence of QTDrugs List 2 drugs (possible risk of TdP) was 36% (n=67). Most of the participants (n=156, 83%) used drugs which under certain circumstances are associated with TdP (QTDrugs List 3).
One fifth (21%) of the participants concomitantly used 2-3 drugs associated with QT prolongation and TdP. QT prolonging drug-drug interactions (SFINX-D interactions) were found in 3% of the participants. The drugs involved in the drug-drug interactions were donepezil, (es)citalopram and haloperidol. The prevalence of use of clinically relevant QT prolonging drugs (QTDrugs Lists 1-2) was higher in this study than the prevalence in outpatients in previous studies. Concomitant use of QT prolonging drugs is common in outpatients. Health care professionals need to be educated on the risks of QT prolongation and TdP, and on the risks of using QT prolonging drugs concomitantly. Risk assessment tools that consider patient-specific risk factors could be more widely used, as they may reduce modifiable risk factors, and actual events of QT prolongation and TdP may be avoided. There is a need for systematic procedures for assessing and managing the risks of QT prolongation and TdP in the Finnish health care system.
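The screening step described above can be sketched in code. The following is a minimal illustration, not the study's actual procedure: the drug sets are hypothetical, heavily abbreviated stand-ins for the CredibleMeds QTDrugs Lists (only drugs named in the abstract are certain members; mirtazapine's list placement is illustrative), and the function simply counts matches against one participant's medication list.

```python
# Hypothetical, heavily abbreviated stand-ins for the QTDrugs Lists;
# the real CredibleMeds lists contain far more drugs.
KNOWN_RISK = {"donepezil", "citalopram", "escitalopram", "haloperidol"}  # cf. List 1
POSSIBLE_RISK = {"mirtazapine"}                                          # cf. List 2 (illustrative)

def qt_drug_counts(medication_list):
    """Count how many of a participant's drugs fall on each risk list."""
    meds = {m.lower() for m in medication_list}
    return {"known": len(meds & KNOWN_RISK),
            "possible": len(meds & POSSIBLE_RISK)}

counts = qt_drug_counts(["Donepezil", "Citalopram", "Metformin"])
# Two or more hits across the lists would flag concomitant use of QT prolonging drugs.
```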
  • Torres Fernández de Castro, Jose Guillermo (2015)
    This thesis is an attempt to find alternative ways of approaching the study of values and political attitudes. The theoretical framework used for this purpose is Schwartz's theory of basic human values. Value profiles are elaborated for ten individual interviews and one focus group. Quantitatively, the Schwartz Value Questionnaire produced scores for each participant. Using Qualitative Content Analysis (QCA), a different profile based on quotations is generated. The results suggest that both measurements inform about the priorities of the interviewee, and that numeric scores can be helpful for understanding the relevance of certain political attitudes expressed in the semi-structured interviews, such as the perceived dimensions of political competition. Additionally, data from five focus groups, conducted with participants from five different municipalities of the State of Mexico, were analyzed using QCA. The qualitative as well as the quantitative differences between the five groups suggest that this method, combined with the framework of Schwartz's basic human values, produces meaningful results that can be related to the socioeconomic profiles of the municipalities.
  • Lehtinen, Sami (2016)
    This work is about the qualitative theory of autonomous ordinary differential equation (ODE) systems. The purpose of the work is threefold. First, it is intended to familiarize the reader with the essential theory of autonomous systems in dimension n. Second, it is hoped that the reader will learn the importance of planar autonomous systems, such as the beautiful result of the Poincaré-Bendixson theorem. Third, since the theory is utilised in applied science, considerable space has been devoted to analytical methods that are widely used in applications. The fundamental theory of existence and uniqueness of solutions to ODE systems is presented in Chapter 2. Chapter 3 then treats the essential theory of autonomous systems in dimension n, such as the orbits and the limit sets of solutions. In Chapter 4 we consider planar autonomous systems. What makes planar systems different from higher dimensions is the Jordan Curve theorem, which has made it possible for the theory to go much further. In particular, this leads to the Poincaré-Bendixson theorem, a statement about the long-term behavior of solutions to an autonomous system in the plane. Note that the Jordan Curve theorem is stated without proof, since the proof is notoriously difficult even though the result seems obvious. Lastly, in order not to lose sight of the applied side of the subject, Chapters 5 and 6 are devoted to analytical methods for autonomous systems. Chapter 5 treats local stability analysis of an equilibrium. Then, in Chapter 6 we work through a relatively large study of an abnormal competing species model based on the science fiction movie The Terminator (1984), which should be taken with a pinch of salt. In its dystopian world, two powerful forces, Men and the Terminator cyborgs, are trying to get completely rid of one another. Lack of space has, however, forced us to simplify some of the individual behaviour.
These simplifications are partly justified by the fact that the purpose is to present how the theory can be applied even in a (hopefully) fictional situation and, of course, to answer the puzzling question of whether the human race would stand a chance against the Terminators.
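Competing-species analyses of the kind undertaken in Chapter 6 are built on planar autonomous systems. Below is a sketch, with invented parameters (not the thesis's Terminator model), of a classical Lotka-Volterra competition system integrated numerically; with these coefficients both populations approach a coexistence equilibrium, found by setting both right-hand sides to zero.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic Lotka-Volterra competition system (illustrative parameters):
#   x' = x (a1 - b1 x - c1 y)
#   y' = y (a2 - b2 y - c2 x)
a1, b1, c1 = 1.0, 1.0, 0.5   # growth / self-limitation / competition, species 1
a2, b2, c2 = 0.8, 1.0, 0.6   # species 2

def rhs(t, z):
    x, y = z
    return [x * (a1 - b1 * x - c1 * y),
            y * (a2 - b2 * y - c2 * x)]

sol = solve_ivp(rhs, (0.0, 50.0), [0.1, 0.1])
x_end, y_end = sol.y[:, -1]
# Since b1*b2 > c1*c2 here, the interior equilibrium (x*, y*) = (6/7, 2/7)
# is asymptotically stable and the trajectory settles onto it.
```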
  • Store, Joakim (2020)
    In software configuration management, branching is a common practice which can enable efficient parallel development between developers and teams. However, developers might not be aware of the different branching practice options or of how exactly to formulate a branching strategy. This can have the opposite effect on productivity and cause other issues as well. The focus of this thesis is on which branching practices are considered beneficial, what affects their usability, what risks are involved, and how to plan these practices in a structured manner. There are plenty of branching practices presented in the literature, which can either complement each other or be completely incompatible. Much of a practice's benefit depends on the surrounding context, such as the tools in use and project characteristics. The most relevant risk in branching is merge conflicts, but there are other risks as well. The approaches for planning a branching strategy, however, are found to be too narrow in the reviewed literature. Thus, the Branching Strategy Formulation and Analysis Method (BSFAM) is proposed to help teams and organizations plan their branching strategy in a structured manner. Additionally, the issues of branching are explored in the context of an organization that has multiple concurrent projects ongoing for a single product. Information on this is gathered through a survey, semi-structured interviews, and available documentation. The issues that were found can be attributed to the lack of a proper base strategy, difficulties in coordination and awareness, and test automation management in relation to branching. The proposed method is then applied in that same context in order to provide solutions to the organization's issues and to provide an example case. BSFAM will be taken into use in upcoming projects in the organization, and it will be improved if necessary.
If the proposed method is adopted more widely and the resulting information published, it could support further research on how different branching practices fit different contexts. Additionally, it could help new, generally better branching practices emerge.
  • Ylinen, Tuike (2019)
    The pharmaceutical industry is supervised by several competent authorities. These authorities all over the world inspect manufacturers in order to make sure they comply with the Good Manufacturing Practice (GMP) guidelines and produce quality products. If non-compliance with the guidelines is detected, the authorities can revoke manufacturing licenses and deny market access for the products. A recent trend in the pharmaceutical industry is that Active Pharmaceutical Ingredient (API) manufacturing is concentrated in a few factories. If such a manufacturer is declared non-compliant and is therefore unable to supply an API, this can lead to drug shortages. This research aimed to find out what kinds of quality problems occur in API manufacturing. Because of the concentration trend, it is important to understand what kinds of problems manufacturers struggle with in order to prevent any risk of shortages. This research also aimed to determine how much quality problems in API manufacturing can contribute to drug shortages. The number and location of these non-compliance cases were investigated as well. The chosen time frame was 2016-2018. Several databases were used as information sources in this research. These databases are maintained by the authorities in the U.S. and Europe, and they contain information about the inspections and the GMP deficiencies found during these inspections. With the information collected from the databases, an inductive content analysis was conducted to determine the reasons for non-compliance with GMP in API manufacturing. Other information (e.g. locations, names of APIs) was also collected from the databases and analysed to answer the rest of the research questions. The results show that the biggest problem areas in API manufacturing were data integrity and analytical testing. Other documentation-related problems also occurred. The number of these cases was quite stable, and their relative proportion declined during the time period.
Comparison between the list of APIs and drug shortage databases showed that over 30% of the non-compliant APIs were later in shortage. The effect was greater in Finland than in the U.S. It was therefore concluded that the most significant GMP deficiencies in API manufacturing were poor data integrity and inappropriate analytical testing procedures. Secondly, although the number of non-compliance cases in API manufacturing did not increase during this time, these problems may have had an impact on drug availability.
  • Wallenius, Tarja (2010)
    In this study, a quality assessment method based on sampling of primary laser inventory units (microsegments) was analysed. The accuracy of a laser inventory carried out in Kuhmo was analysed as a case study. Field sample plots were measured on the sampled microsegments in the Kuhmo inventory area. Two main questions were considered: did the ALS-based inventory meet the accuracy requirements set for the provider, and how should a reliable, cost-efficient and independent quality assessment be undertaken? The agreement between the control measurements and the ALS-based inventory was analysed in four ways: 1) The root mean squared errors (RMSEs) and bias were calculated. 2) Scatter plots with 95% confidence intervals were plotted and the placing of identity lines was checked. 3) Bland-Altman plots were drawn, in which the mean difference of attributes between the control method and the ALS method was calculated and plotted against the average value of the attributes. 4) Tolerance limits were defined and combined with the Bland-Altman plots. The RMSE values were compared to a reference study from which the accuracy requirements for the service provider had been set. The accuracy requirements in Kuhmo were achieved; however, comparison of RMSE values proved to be difficult. Field control measurements are costly and time-consuming, but they are considered robust. However, control measurements may themselves include errors, which are difficult to take into account. Bland-Altman plots treat neither of the compared methods as completely exact, which offers a fair way to interpret the assessment results. It was suggested that tolerance limits, set in the order and combined with Bland-Altman plots, be adopted in practice. In addition, bias should be calculated for the total area. Some other approaches for quality control were briefly examined.
No method was found to fulfil all the required demands of statistical reliability, cost-efficiency, time efficiency, simplicity and speed of implementation. Some benefits and shortcomings of the studied methods are discussed.
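The agreement checks listed above can be illustrated numerically. The sketch below uses invented field-control and ALS estimates (not data from the study) to compute the RMSE, the bias, and the Bland-Altman 95% limits of agreement.

```python
import numpy as np

# Invented paired estimates (e.g. stand volume, m^3/ha): field control vs. ALS.
control = np.array([210.0, 180.0, 250.0, 155.0, 300.0])
als     = np.array([202.0, 191.0, 243.0, 160.0, 288.0])

diff = als - control
rmse = np.sqrt(np.mean(diff ** 2))   # 1) root mean squared error
bias = np.mean(diff)                 # 1) bias = mean difference

# 3)-4) Bland-Altman analysis: differences are plotted against pairwise
# means, with 95% limits of agreement at bias +/- 1.96 * SD of differences.
mean_pair = (als + control) / 2.0
sd = np.std(diff, ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd
```

Tolerance limits would then be overlaid on the same plot: if the limits of agreement fall inside the tolerated range, the ALS inventory passes the check.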
  • Rantanen, Milla-Maarit (2020)
    Semiconductor radiation detectors are devices used to detect electromagnetic and particle radiation. The signal formation is based on the transportation of charges between the valence band and the conduction band. The interaction between the detector material and the radiation generates free electrons and holes that move in opposite directions in the electric field applied between the electrodes. The movement of charges induces a current in the external electrical circuit, which can be used for particle identification, measurement of energy or momentum, timing, or tracking. There are several different detector materials and designs, and new options are continuously developed. Diamond is a detector material that has received a great amount of interest in many fields. This is due to its many unique properties, many of which arise from the diamond crystal structure and the strength of the bond between the carbon atoms. The tight and rigid structure makes diamond a strong and durable material, which allows the operation of diamond detectors in harsh radiation environments. This, combined with fast signal formation and a short response time, makes diamond detectors an excellent choice for high energy physics applications. The diamond structure also leads to a wide band gap. Thanks to the wide band gap, diamond detectors have low leakage current and can be operated even at high temperatures without protection from surrounding light. The electrical properties of semiconductors in particular depend strongly on the concentration of impurities and crystal defects. Determination of electrical properties can therefore be used to study the crystal quality of the material. The electrical properties of the material determine the safe operational region of the device, and knowledge of the leakage current and the charge carrier transportation mechanism is required for the optimized operation of detectors.
Characterization of electrical properties is therefore an important part of semiconductor device fabrication. Electrical characterization should be done at different stages of fabrication in order to detect problems at an early stage and to get an idea of what could have caused them. This work describes the quality assurance process of single crystal CVD (chemical vapour deposition) diamond detectors for the PPS detectors of the CMS experiment. The quality assurance process includes visual inspection of the diamond surfaces and dimensions by optical and cross-polarized light microscopy, and electrical characterization by measurement of leakage current and CCE (charge collection efficiency). The CCE measurement setup was improved with a stage controller, which allows automatic measurement of CCE at several positions on the diamond detector. The operation of the new setup and the reproducibility of the results were studied by repeated measurements of a reference diamond. The setup could successfully be used to measure CCE over the whole diamond surface. However, the measurement uncertainty is quite large. Further work is needed to reduce the measurement uncertainty and to determine the correlation between observed defects and the measured electrical properties.
  • Aaltonen, Serja (Helsingin yliopistoHelsingfors universitetUniversity of Helsinki, 2007)
    ALICE (A Large Ion Collider Experiment) is an experiment at CERN (European Organization for Nuclear Research), where a heavy-ion detector is dedicated to exploiting the unique physics potential of nucleus-nucleus interactions at LHC (Large Hadron Collider) energies. As a part of that project, 716 so-called type V4 modules were assembled in the Detector Laboratory of the Helsinki Institute of Physics during the years 2004-2006. With altogether over a million detector strips, this has been the most massive particle detector project in the science history of Finland. One ALICE SSD module consists of a double-sided silicon sensor, two hybrids containing 12 HAL25 front-end readout chips, and some passive components, such as resistors and capacitors. The components are connected together by TAB (Tape Automated Bonding) microcables. The components of the modules were tested in every assembly phase with comparable electrical tests to ensure the reliable functioning of the detectors and to pinpoint possible problems. The components were accepted or rejected according to limits confirmed by the ALICE collaboration. This study concentrates on the test results of framed chips, hybrids and modules. The total yield of the framed chips is 90.8%, of the hybrids 96.1%, and of the modules 86.2%. The individual test results have been investigated in the light of the known error sources that appeared during the project. After the problems appearing during the learning curve of the project were solved, material problems, such as defective chip cables and sensors, seemed to cause most of the assembly rejections. The problems were typically seen in tests as too many individual channel failures. Bonding failures, by contrast, rarely caused the rejection of any component. One sensor type among the three different sensor manufacturers has proven to have lower quality than the others.
The sensors of this manufacturer are very noisy, and their depletion voltages are usually outside the specification given to the manufacturers. Reaching a 95% assembly yield during module production demonstrates that the assembly process has been highly successful.
  • Bärlund, Hanna-Maria (2011)
    Since the beginning of the 1990s, the emphasis on participatory democracy has become stronger in Finnish policy- and decision-making. This development involves various stakeholders participating in negotiations, or more specifically deliberations, around current issues in order to reach consensus and enable continuity in the policy process. According to research, the more consensual a democracy is, the more favourable its policy outcomes are towards environmental issues. The three case studies investigated, i.e. the Forest Biodiversity Programme for Southern Finland, the Working Group on Renewable Energy, and the Natura 2000 Network of European Union nature protection areas, support this notion. The case studies focus on how the key players involved have conceived the decision-making process in terms of achieved goals and degree of agreement, as well as on the specific issue context as a backdrop to the development of policy. The cases displayed significant differences in outcomes depending on the achieved level of consensus and deliberation. The outcomes are analysed within the theoretical frameworks of Arend Lijphart's consensus vs. majoritarian model of democracy and Martin Jänicke's consensual capacity for ecological modernisation. Further, applying Joshua Cohen's theory of deliberative democracy and his suggestions for achieving ideal deliberation, the results suggest that the connection between consensus democracy and more effective environmental conservation policy is not that clear-cut. Nevertheless, consensus democracy provides a promising point of departure for overcoming the main disputes between the stakeholders and for agreeing on common starting points and general goals, which is crucial in order for any progress in environmental conservation to take place.
  • Sin, Ngor Shek (2020)
    “Excessive delays in the delivery of justice are equal to the denial of justice”. This is a sentence that has repeatedly appeared in different reports, research papers, comments and opinions concerning the UN internal justice system. This article attempts to answer the question of whether it is possible to assess the quality of justice of the UN internal justice system in the different ways established in national and supranational organizations.
  • Laine, Katarina (2015)
    The objective of this article was to analyze whether the reporting of 3rd and 4th degree obstetric anal sphincter injuries differs between patient data recording systems. The study was retrospective. The setting included all six delivery units in the Hospital District of Helsinki and Uusimaa (HUS), comprising one third of all deliveries in Finland. The population was all deliveries in HUS in 2012 (n=18099). The incidence of sphincter injury was extracted from three electronic medical record (EMR) systems (Obstetrix, Opera and Oberon), using the national versions of the International Classification of Diseases 10th revision (ICD-10) and the Nordic Classification of Surgical Procedures (NOMESCO). All observed cases were studied carefully from the patient records, and the reliability of the different systems was analyzed and compared to the data reported to national registers (the Medical Birth Register, MBR, and the Hospital Discharge Register, HDR). The main outcome measure was the sphincter injury rate in the delivery units. We found that the actual rate of sphincter injury in all the EMRs combined in HUS was higher (1.8%) than the rate derived from any single reporting system (from 1.5% to 1.7%), and varied even more among single delivery units. The coverage in the MBR (88%) was much higher than in the HDR (3%). In conclusion, the simultaneous use of several patient data recording systems is confusing and prone to systematic errors. One common database - preferably an EMR with a structured format - would clarify registration and enable reliable quality reports, creating a sustainable base for quality improvements.
  • Abbas, Hassan (2018)
    Mobile users surpassing desktop users in number has tempted mobile network operators to deploy traffic shaping policies in order to utilize their resources efficiently. These policies have significantly lowered the quality of service of applications. Present systems can accurately detect the traffic discrimination of different application protocols, for instance BitTorrent relative to the HTTP protocol, and extract the quality of service statistically by comparing the data. This thesis proposes a method that tries to understand system performance and application behavior, along with the network's performance in requesting and delivering the desired quality of service. We devised a framework which tests an MNO (Mobile Network Operator) and its policies on the 4G network with regard to the Type of Service flags in the IP header. We investigate whether the network path allows applications like Skype, WhatsApp, Facebook Messenger and Viber to set the Type of Service (DSCP class) in their IP headers. We implemented the framework as an Android application which sets the DSCP class in the IP header for each respective application's data. Our results show that major mobile network operators in Finland do not allow applications to set DSCP classes in their IP headers for better quality of service.
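As a sketch of the core mechanism such a framework exercises, the snippet below marks a UDP socket's traffic with a DSCP class via the standard IP_TOS socket option (the thesis's Android implementation differs; this is a plain-Python illustration and assumes a POSIX-style network stack). DSCP occupies the upper 6 bits of the former ToS byte, so the option value is the code point shifted left by 2.

```python
import socket

# Expedited Forwarding (EF) is DSCP code point 46; as a ToS byte value
# this is 46 << 2 = 0xB8.
DSCP_EF = 46

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
tos = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
sock.close()
# Whether the marking survives end-to-end is exactly what must be measured:
# operators may rewrite or strip the DSCP field in transit.
```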
  • Nousiainen, Katri (2018)
    The human brain is divided into the left and right hemispheres, and there are functional differences between the hemispheres. A hemispheric difference is called lateralization of brain function, and the degree of lateralization is described by the laterality index. The most investigated domain of lateralized brain function is language, which is a left-hemisphere-dominant function in the majority of the population. Functional magnetic resonance imaging provides a noninvasive method for studying brain functions indirectly through the blood-oxygenation-level-dependent effect. Language-related functional magnetic resonance imaging can be used for the localization of Broca's speech area and determination of the dominant hemisphere in epileptic patients. The purpose of this thesis is to assess a method for calculating the laterality index from functional magnetic resonance imaging data. The data were acquired during three language task paradigms with five subjects and analyzed statistically. The methods used for laterality index calculations are reviewed, and a new calculation method is presented. Result tables of laterality indices and hemispheric dominances per region of interest are generated. The presented laterality index calculation method successfully determined the speech laterality of three subjects out of five as left hemispheric dominance. The language laterality of the remaining two subjects could not be determined due to corrupted functional data and contradictory results between the different paradigms. The major source of error is the subject's head motion during functional imaging. Together with information about the extent of head motion, the generated table could provide relevant extra information for epileptic patients' functional magnetic resonance imaging data and could serve clinical purposes in the future.
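The laterality index referred to above is, in its standard form, computed from suprathreshold voxel counts in homologous left and right regions of interest; the sketch below shows that common definition (the thesis presents its own variant, which may differ), with invented voxel counts and the frequently used ±0.2 dominance cutoff.

```python
# Standard laterality index: LI = (L - R) / (L + R), where L and R are
# counts of suprathreshold voxels in the left and right ROI. LI ranges
# from -1 (fully right-lateralized) to +1 (fully left-lateralized).
def laterality_index(left_voxels, right_voxels):
    total = left_voxels + right_voxels
    if total == 0:
        raise ValueError("no suprathreshold activation in either ROI")
    return (left_voxels - right_voxels) / total

li = laterality_index(820, 310)   # illustrative voxel counts
dominance = "left" if li > 0.2 else ("right" if li < -0.2 else "bilateral")
```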
  • Holmström, Oscar; Linder, Nina; Lundin, Mikael; Moilanen, Hannu; Suutala, Antti; Turkki, Riku; Joensuu, Heikki; Isola, Jorma; Diwan, Vinod; Lundin, Johan (2015)
    Introduction: A significant barrier to medical diagnostics in low-resource environments is the lack of medical care and equipment. Here we present a low-cost, cloud-connected digital microscope for applications at the point-of-care. We evaluate the performance of the device in the digital assessment of estrogen receptor-alpha (ER) expression in breast cancer samples. Studies suggest that computer-assisted analysis of tumor samples digitized with whole-slide scanners may be comparable to manual scoring; here we study whether similar results can be obtained with the device presented. Materials and methods: A total of 170 samples of human breast carcinoma, immunostained for ER expression, were digitized with a high-end slide scanner and the point-of-care microscope. Corresponding regions from the samples were extracted, and ER status was determined visually and digitally. Samples were classified as ER negative (<1% ER positivity) or positive, and further into weakly (1-10% positivity) and strongly positive. Interobserver agreement (Cohen's kappa) was measured and correlation coefficients (Pearson's product-moment) were calculated for comparison of the methods. Results: Correlation and interobserver agreement (r = 0.98, p < 0.001, kappa = 0.84, CI95% = 0.75-0.94) were strong in the results from both devices. Concordance of the point-of-care microscope and the manual scoring was good (r = 0.94, p < 0.001, kappa = 0.71, CI95% = 0.61-0.80), and comparable to the concordance between the slide scanner and manual scoring (r = 0.93, p < 0.001, kappa = 0.69, CI95% = 0.60-0.78). Fourteen (8%) discrepant cases between manual and device-based scoring were present with the slide scanner, and 16 (9%) with the point-of-care microscope, all representing samples of low ER expression. Conclusions: Tumor ER status can be accurately quantified with a low-cost imaging device and digital image analysis, with results comparable to conventional computer-assisted or manual scoring.
This technology could potentially be expanded for other histopathological applications at the point-of-care.
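The two agreement statistics reported above can be computed as follows; the ER category data here are invented for illustration (0 = negative, 1 = weakly positive, 2 = strongly positive), not taken from the study.

```python
import numpy as np

# Invented categorical scores: manual scoring vs. device-based scoring.
manual = np.array([0, 0, 1, 2, 2, 2, 1, 0, 2, 1])
device = np.array([0, 0, 1, 2, 2, 1, 1, 0, 2, 1])

def cohens_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    cats = np.union1d(a, b)
    po = np.mean(a == b)                                   # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in cats)  # chance agreement
    return (po - pe) / (1 - pe)

kappa = cohens_kappa(manual, device)
r = np.corrcoef(manual, device)[0, 1]   # Pearson product-moment correlation
```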
  • Toropainen, Siiri (2020)
    Human induced pluripotent stem cells (hiPSC) can be propagated in long-term culture and further differentiated into many cell types, including cardiomyocytes (CM) and endothelial cells (EC). Human induced pluripotent stem cell derived cardiomyocytes (hiPSC-CM) are promising tools in cardiac research, since they retain the original genotype of the individual donor and thus enable the use of patient- and disease-specific cells. Crucial for the optimal use of hiPSC-CMs in experiments are methods for assessing cardiomyocyte phenotype. Contraction is a prominent feature of CMs, and it is essential that contraction can be quantified accurately. Reliable quantification is relevant when hiPSC-CMs are used for studying disease phenotypes, cardiac safety pharmacology, genotype-phenotype correlations, cardiac disease mechanisms and cardiac function over time. In this thesis project, the contractile behavior of hiPSC-CMs was analyzed using video microscopy and the online tool MUSCLEMOTION. Contraction parameters were obtained from hiPSC-CMs derived from patients with hypoplastic left heart syndrome (HLHS) and from healthy controls at multiple timepoints during differentiation. In addition, contraction was analyzed in hiPSC-CMs cocultured with induced pluripotent stem cell derived endothelial cells (hiPSC-ECs), since it has been suggested that ECs can promote morphological and functional maturation of CMs in culture. Contraction duration (CD), time to peak (TTP), relaxation time (RT) and contraction amplitude (CA) were compared between different timepoints as well as between CMs cocultured with ECs and CMs cultured alone. Compared to control cell lines, HLHS patient hiPSC-CMs exhibited longer CD, TTP and RT, as well as higher CA values. This difference was present at most of the timepoints, suggesting slower contractile kinetics in HLHS patient hiPSC-CMs compared to control hiPSC-CMs.
Significant changes were also observed in contraction parameters when comparing hiPSC-CMs in coculture and monoculture. Contraction parameters of cocultured hiPSC-CMs changed in a relatively consistent manner over time, increasing or decreasing throughout the monitoring period, whereas in hiPSC-CM monoculture there was more variation between timepoints. This project and its results support the use of modern methods in the detailed functional characterization of hiPSC-derived cells. In addition, it highlights the potential of coculture in disease modeling and the fact that hiPSC-CMs exhibit variation in phenotypes. However, the experiments should be repeated, and additional methods should be used in order to further validate the results and conclusions.
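Contraction parameters of the kind listed above can be read off a motion trace with standard peak detection. The trace below is synthetic (MUSCLEMOTION derives its traces from video), and the detection thresholds are arbitrary; this only sketches the principle of extracting beat rate, amplitude and beat-to-beat intervals.

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic 1 Hz "contraction" trace sampled at 100 frames per second:
# narrow positive pulses mimic the motion amplitude of a beating monolayer.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
signal = np.maximum(np.sin(2 * np.pi * 1.0 * t), 0.0) ** 4

# One peak per beat: require a minimum height and at least 0.5 s spacing.
peaks, _ = find_peaks(signal, height=0.5, distance=int(0.5 * fs))
beats = len(peaks)                        # beat count over the recording
amplitude = signal[peaks].mean()          # contraction amplitude (cf. CA)
intervals = np.diff(peaks) / fs           # beat-to-beat intervals, seconds
```

Contraction duration, time to peak and relaxation time (CD, TTP, RT) would additionally require locating the onset and end of each pulse, e.g. by thresholding around each detected peak.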
  • Lampinen, Anniina (2021)
    The natural carbon cycle is affected by human activity. Terrestrial carbon stocks have been decreasing as at the same time carbon dioxide concentration in the atmosphere has increased causing climate change. The Paris Agreement sets the target to limit climate change to 1.5°C and to reach that goal, all possible mitigation practises should be included into global framework to avoid the most serious consequences of warming. Carbon sequestration into natural soil and biomass could be one mitigation practice. To enhance carbon sequestration activities and to include natural carbon stocks into to the EU climate policy, it would be necessary to quantify stock sizes and changes in those stocks. For developing carbon trading markets, the quantification methods should provide accurate results and at the same time be practical and financially achievable. Used research method in this thesis was comparatively literature survey and aim was to gather and compere information about currently used carbon stock quantification methods against developing carbon trading markets. Soil carbon stocks can be quantified with direct soil sampling, spectroscopic sensing methods or by mathematical models. Biomass carbon stocks can be quantified with inventory-based field measurements and modelling and by remote sensing. The full carbon budget on the ecosystem level can be achieved with carbon flux measurements. Quantification of different terrestrial carbon stocks and their changes is not a simple task. There is a lot of variation between different stocks and in some cases, the stock changes occur slow. Cost of carbon stock quantification depends on the accuracy, size of the area under focus and frequency of the measures. Methods for terrestrial carbon stock quantification are dependent on high quality data and there is demand for research considering carbon sequestration. 
For the carbon offsetting purposes of developing carbon markets, the modelling approach is achievable, cost-efficient, repeatable and transparent. There is no perfect or universal model that fits every situation, and thus the differences between models must be known. At this stage, this approach could be one way to include small-scale projects and enhance climate action. Different quantification methods provide information that can be used for further method development and to improve accuracy. It is important to know how all this information can be effectively utilized.
  • Sirviö, Robert (2016)
    Measuring risk is mandatory in every form of responsible asset management; be it mitigating losses or maximizing performance, the level of risk dictates the magnitude of the effect of the strategy the asset manager has chosen to execute. Many common risk measures rely on simple statistics computed from historic data. In this thesis, we present a more dynamic risk measure explicitly aimed at the commodity futures market. The basis of our risk measure is a stochastic model of the commodity spot price, namely the Schwartz two-factor model. The model is essentially determined by a system of stochastic differential equations, where the spot price and the convenience yield of the commodity are modelled separately. The spot price is modelled as a geometric Brownian motion with a correction factor (the convenience yield) applied to the drift of the process, whereas the convenience yield is modelled as an Ornstein-Uhlenbeck process. Within this framework, we show that the price of a commodity futures contract has a closed-form solution. The pricing of futures contracts works as a coupling between the unobservable spot price and the observable futures contract price, rendering model fitting and filtering techniques applicable to our theoretic model. The fitting of the system parameters of our model is done by utilizing the prediction error decomposition algorithm. The core of the algorithm is a by-product of a filtering algorithm called the Kalman filter: the Kalman filter enables the evaluation of the likelihood of a single parameter set. By subjecting the likelihood evaluation to numerical optimization, the optimal parameter set is acquired, provided that the process converges. Once we have attained the optimal parameter sets for all of the commodity futures included in the portfolio, we are ready to perform the risk measurement procedure. 
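The prediction error decomposition described above can be sketched in a few lines. The following is a minimal, illustrative example, not the thesis's implementation: it uses a one-dimensional linear-Gaussian state-space model with hypothetical parameter values and simulated data, only to show how the Kalman filter's one-step prediction errors and their variances yield a log-likelihood that can then be handed to a numerical optimizer.

```python
import numpy as np

def kalman_loglik(y, phi, q, r, m0=0.0, p0=1.0):
    """Log-likelihood of observations y under the toy state-space model
         x_t = phi * x_{t-1} + w_t,  w_t ~ N(0, q)   (latent state)
         y_t = x_t + v_t,            v_t ~ N(0, r)   (observation)
       via the prediction error decomposition: each filter step produces a
       one-step prediction error e_t with variance f_t, and
       log L = sum_t -0.5 * (log(2*pi*f_t) + e_t**2 / f_t)."""
    m, p = m0, p0
    loglik = 0.0
    for obs in y:
        # predict
        m_pred = phi * m
        p_pred = phi ** 2 * p + q
        # prediction error and its variance
        e = obs - m_pred
        f = p_pred + r
        loglik += -0.5 * (np.log(2 * np.pi * f) + e ** 2 / f)
        # update
        k = p_pred / f               # Kalman gain
        m = m_pred + k * e
        p = (1 - k) * p_pred
    return loglik

# Simulate data from the model, then compare likelihoods of two parameter
# sets; maximizing this function numerically recovers the fitted parameters.
rng = np.random.default_rng(0)
x, y = 0.0, []
for _ in range(200):
    x = 0.9 * x + rng.normal(0, 0.5)
    y.append(x + rng.normal(0, 0.2))
y = np.array(y)
ll_true = kalman_loglik(y, phi=0.9, q=0.25, r=0.04)
ll_bad = kalman_loglik(y, phi=0.1, q=0.25, r=0.04)
```

The likelihood evaluated at the data-generating parameters dominates the mis-specified one, which is exactly the property the numerical optimization exploits.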
The first phase of the process is to generate multiple future trajectories of the commodity spot prices and convenience yields. The trajectories are then subjected to the trading algorithm, generating a distribution of returns for every commodity. Finally, the distributions are aggregated, resulting in a portfolio-level returns distribution for a given target time frame. We show that the properties of this distribution can be used as an indicator for possible anomalies in the returns within the given time frame.
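As a rough sketch of the trajectory-generation phase, the illustrative example below simulates the Schwartz two-factor dynamics with an Euler-Maruyama discretization. All parameter values are hypothetical, and the simple terminal-return calculation at the end merely stands in for the trading algorithm described above.

```python
import numpy as np

def simulate_schwartz_two_factor(s0, delta0, mu, kappa, alpha,
                                 sigma_s, sigma_d, rho,
                                 t_horizon, n_steps, n_paths, seed=0):
    """Euler-Maruyama paths of the Schwartz two-factor model:
         dS     = (mu - delta) S dt + sigma_s S dW1      (spot price, GBM-like)
         ddelta = kappa (alpha - delta) dt + sigma_d dW2 (convenience yield, OU)
       with corr(dW1, dW2) = rho. Returns terminal spot prices and yields."""
    rng = np.random.default_rng(seed)
    dt = t_horizon / n_steps
    log_s = np.full(n_paths, np.log(s0))   # simulate log-price: stays positive
    delta = np.full(n_paths, delta0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho ** 2) * rng.standard_normal(n_paths)
        log_s += (mu - delta - 0.5 * sigma_s ** 2) * dt \
                 + sigma_s * np.sqrt(dt) * z1
        delta += kappa * (alpha - delta) * dt + sigma_d * np.sqrt(dt) * z2
    return np.exp(log_s), delta

# Hypothetical parameters; a real application would use the optimized set.
spot, cy = simulate_schwartz_two_factor(
    s0=50.0, delta0=0.05, mu=0.08, kappa=1.5, alpha=0.06,
    sigma_s=0.3, sigma_d=0.4, rho=0.7,
    t_horizon=1.0, n_steps=252, n_paths=10_000)
returns = spot / 50.0 - 1.0   # stand-in for per-path strategy returns
```

Aggregating such per-commodity return distributions across the portfolio then yields the portfolio-level distribution whose properties serve as the risk indicator.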
  • Denham, Sander (2015)
    Pinus taeda is an important timber species both economically and ecologically. In past years there have been severe economic losses, as well as ecological disruption, due to epidemic outbreaks of Dendroctonus frontalis. Resin flow is the first line of defense within conifer species, acting as both a physical and chemical barrier to invading pests. This study demonstrates the effectiveness of utilizing aggregation pheromones to attract Ips spp. bark beetles to Pinus taeda plantation stands in order to study the resin flow defense response mechanism. Individual trees were selected to be baited with aggregation pheromones. Trees in close proximity to the baited tree were labeled as monitor trees, and a control was established. Results of a general linear model for the aggregation pheromone attracting Ips spp. beetles indicate that there was a significant difference (p<0.0001) between the baited and control trees. Using a repeated measures ANOVA, differences in resin flow exudation in Pinus taeda were considered among varying stand conditions (fertilizer, fire, and control plots) during the induced Ips spp. bark beetle attack. This study illustrates that different stand conditions elicit a greater or lesser response of Ips spp. to the baited trees; however, site treatment did not significantly affect resin flow. We conclude that utilizing pheromones to attract Ips spp. bark beetles is an effective technique for studying the resin flow defense in conifers. From a management perspective, it is concerning to see differences in bark beetle activity among different stand conditions while simultaneously seeing no difference in resin flow defense, making this an important aspect of integrated pest management study and an area in need of further research.
  • Vázquez Mireles, Sigifredo (2021)
    Piperine is the major plant alkaloid encountered in various Piperaceae species and has received considerable attention in recent years because of its broad range of favorable biological and pharmacological activities, including antioxidant, immunostimulant, bioavailability-enhancing and anti-carcinogenic properties. The literature part of this thesis gives a selective overview of advanced methods for the quantitative analysis of piperine in plant-based materials, and of the various approaches employed for instrumental analysis, including spectroscopic, chromatographic, and electrochemical techniques. An effort was made to evaluate the potential of the reported methods based on analytical figures of merit, such as total sample throughput capacity, analytical range, precision, accuracy, limit of detection and limit of quantification. The experimental part of the thesis focused on the development of a convenient, robust, simple, efficient and reliable method to quantify piperine in pepper fruits. The analytical method established in this thesis involves liberation of piperine by continuous liquid extraction of ground pepper fruits with methanol, followed by cleanup of the crude extracts with reversed-phase solid-phase extraction. Analyte quantitation was accomplished using gradient reversed-phase high-performance liquid chromatography with mass spectrometric detection, using electrospray ionization ion trap mass spectrometry. To enable reliable internal standardization, a deuterium-labelled piperine surrogate (piperine-D10) was synthesized from piperine in three steps in a reasonable overall yield (65%) and standard-level purity (99.7%). It may be worth mentioning that the commercial market value of the amount of piperine-D10 synthesized in-house exceeds 167,400 euros. 
One of the major challenges encountered during the development and optimization of the analytical method was the extreme photosensitivity of piperine and piperine-D10, both undergoing extensive photoisomerization in solution upon exposure to ambient light within a matter of minutes. This issue was addressed by carrying out all tasks associated with synthesis, sample preparation and analytical measurements under dark conditions. For the preparation of calibrators, a fully automated procedure was developed, controlled by custom-written injector programs and executed in the light-protected sample compartment of a conventional autosampler module. In terms of merits, the developed analytical method offers good sample throughput capacity (run time 20 min, retention time 8.2 min), excellent selectivity and high sensitivity (limit of detection 0.012 ppm, limit of quantification 0.2 ppm). The method is applicable over a linear range of 0.4 to 20 ng of injected mass (r2 = 0.999). The stability of standards and fully processed samples was found to be excellent, with less than 5% variation in concentration after 3 weeks (calibrators) or 4 months (samples) of storage at 4 °C and 23 °C, respectively, under dark conditions. Intra-day repeatability was better than 2.95%. Preliminary validation data also suggest satisfactory inter-operator reproducibility. To test the applicability of the developed LC-MS method, it was employed to quantify piperine in a set of 15 pepper fruit samples, including black, white, red and green varieties of round and long peppers, purchased from local markets and retailers. The piperine contents obtained were in the range of 17.28 to 56.25 mg/g (piperine/minced sample) and generally in good agreement with values reported in the scientific literature. 
It is justified to assume that the developed analytical method may be directly applicable to the quantitation of related pepper alkaloids in herbal commodities and, after some modifications to the sample preparation strategy, also to the monitoring of piperine in biological fluids, such as serum and urine.
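To illustrate how quantitation over a linear calibration range with an internal standard works in practice, the sketch below fits a calibration line by ordinary least squares and back-calculates an unknown. The calibrator levels span the reported 0.4-20 ng range, but the area ratios and the unknown are invented for illustration and are not the thesis's data.

```python
import numpy as np

# Hypothetical calibrator levels (ng injected) and illustrative peak-area
# ratios (analyte area / piperine-D10 internal-standard area).
amount = np.array([0.4, 1.0, 2.0, 5.0, 10.0, 20.0])
area_ratio = np.array([0.021, 0.052, 0.104, 0.259, 0.517, 1.036])

# Ordinary least squares calibration line: ratio = slope * amount + intercept
slope, intercept = np.polyfit(amount, area_ratio, 1)

# Coefficient of determination as a linearity check
pred = slope * amount + intercept
ss_res = np.sum((area_ratio - pred) ** 2)
ss_tot = np.sum((area_ratio - area_ratio.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

# Back-calculate an unknown sample from its measured area ratio
unknown_ratio = 0.40
unknown_ng = (unknown_ratio - intercept) / slope
```

Dividing the back-calculated injected mass by the sample mass represented in the injection would then give a content in mg/g, as reported for the pepper samples above.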
  • Kallonen, Kimmo (2019)
    Quarks and gluons are elementary particles called partons, which produce collimated sprays of particles when protons are collided head-on at the Large Hadron Collider. These observable signatures of the quarks and gluons are called jets and are recorded by huge particle detectors, such as the Compact Muon Solenoid. The reconstruction of the jets from detector signals attempts to trace the particle-level information all the way back to the level of the initial collision event with the initiating partons. Jets originating from gluons and the three lightest quarks are very similar to each other, exhibiting only subtle differences caused by the fact that gluons radiate more intensely. Quark/gluon jet discrimination algorithms are dedicated to identifying these two types of jets. Traditionally, likelihood-based quark/gluon discriminators have been used. While machine learning is nothing new to the high energy physics community, the advent of deep neural networks caused an upheaval, and they are now being implemented to take on various tasks across the research field, including quark/gluon discrimination. In this thesis, three different deep neural network models are presented and their comparative performance in quark/gluon discrimination is evaluated in seven different bins of varying jet transverse momentum and pseudorapidity. The performance of a likelihood-based discriminator is used as a benchmark. Deep neural networks prove to provide excellent performance in quark/gluon discrimination, with a jet image-based visual recognition model being the most robust and offering the largest performance improvement over the benchmark discriminator.
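As a minimal illustration of the discrimination task (not any of the thesis's models), the toy sketch below exploits one known consequence of gluons radiating more intensely, namely higher jet constituent multiplicity, and trains a simple logistic-regression discriminator on synthetic multiplicities. All distributions and numbers here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: gluon jets tend to have higher constituent
# multiplicity than light-quark jets (Poisson means chosen arbitrarily).
n = 5000
quark_mult = rng.poisson(15, n)   # label 0: quark jets
gluon_mult = rng.poisson(25, n)   # label 1: gluon jets
x = np.concatenate([quark_mult, gluon_mult]).astype(float)
y = np.concatenate([np.zeros(n), np.ones(n)])

# One-feature logistic regression trained by plain gradient descent
x_std = (x - x.mean()) / x.std()   # standardize the feature
w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x_std + b)))   # sigmoid output
    w -= lr * np.mean((p - y) * x_std)           # gradient of the log-loss
    b -= lr * np.mean(p - y)

pred = 1.0 / (1.0 + np.exp(-(w * x_std + b))) > 0.5
accuracy = np.mean(pred == y)
```

Real discriminators, whether likelihood-based or deep neural networks over jet images, combine many such variables, and the residual overlap of the two classes is why the problem remains genuinely hard.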