
Browsing by Title


  • González Latorre, Eduardo (2015)
    Field work is needed to obtain reliable estimates when forest inventories are carried out. Field measurements have traditionally been the main source of information for inventories, but nowadays remotely sensed data collected with active or passive sensors mounted on satellite and aerial platforms are also used to help in the estimation of forest parameters. Although the use of remotely sensed data is of great help in forest inventories, field data still plays a very important role as reference data for calibrating results and assessing accuracy. Considering that the time and budget required for field work are generally among the main concerns in forest inventory planning, the development of faster, cheaper, simpler, more accurate or more reliable field inventory methods and tools is a topic of great interest. Trestima™ is a forest inventory system based on the interpretation of images taken with a mobile phone. Its accuracy and efficiency in estimating forest parameters were studied using sample plots in Russia. A total of 156 field plots were measured. The forest parameters measured were the plot basal area and the sample trees' diameters and heights. The data collected with Trestima were meant to replicate a typical relascope sample plot inventory (variable radius plot inventory). Measurements obtained using traditional tools were used as reference data. The data collected for the inventory included plots in forest stands with different structures: from young to mature stands, and from mixed stands to stands dominated by different species (most often Norway spruce, Picea abies (L.) H. Karst). The plots' basal areas ranged from 7 to 62 m2/ha, the tree diameters from 3 to 60 cm and the tree heights from 3 to 35 m. The time used to measure the plots with Trestima and with the reference methods was recorded. The data for each forest variable and the time invested in taking the measurements were organized as paired samples and compared using bias and RMSE as statistical estimators, as well as the paired Student's t-test. Compared to the reference measurements, Trestima underestimated the basal area with a bias of 1.2 m2/ha (3.7%), but the differences were not statistically significant. In mixed stands, Trestima overestimated spruce basal area (bias of 13.9%), but in spruce-dominated stands it underestimated it (bias of 4.9%). Trestima overestimated tree diameters with a root mean squared error (RMSE) of 5.5 to 7.9%, depending on the tree species, but underestimated tree heights with an average RMSE of 3.7 m (17.5%). The Trestima sample plot measurements were done faster than with traditional tools: Trestima measurements were on average 1.6 minutes (14.8%) faster. The Trestima system provided results comparable to the reference method for all the measured forest parameters. The worst results were obtained for the measurement of tree heights. The interpretation of the results for the basal area indicated that the system could benefit from taking stand structure into consideration, especially for species-specific estimations. Trestima provided faster measurements of the forest parameters. One important advantage is that Trestima automatically produces georeferenced data, which can be used in later analyses, for example in the interpretation of remotely sensed data or in forest planning.
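    A minimal sketch of the paired-sample comparison described above: bias, relative RMSE and a paired Student's t-test between Trestima and reference basal-area measurements. The arrays below are illustrative placeholders, not the thesis data.

```python
import numpy as np
from scipy import stats

def paired_comparison(estimate, reference):
    """Return bias, relative RMSE (%) and paired t-test p-value."""
    estimate = np.asarray(estimate, dtype=float)
    reference = np.asarray(reference, dtype=float)
    diff = estimate - reference
    bias = diff.mean()                          # mean error
    rmse = np.sqrt(np.mean(diff ** 2))          # root mean squared error
    rel_rmse = 100 * rmse / reference.mean()    # RMSE as % of reference mean
    t_stat, p_value = stats.ttest_rel(estimate, reference)
    return bias, rel_rmse, p_value

# Hypothetical basal areas (m2/ha) for a handful of plots:
trestima = [21.3, 34.0, 12.8, 45.1, 28.6]
reference = [22.0, 35.2, 13.5, 46.0, 30.1]
print(paired_comparison(trestima, reference))
```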
  • Karim, Abdul (2013)
    Four different Bradyrhizobium sp. (lupin) inoculants were investigated in both greenhouse and field experiments to compare their effects on growth, yield and biological nitrogen fixation. The greenhouse experiment compared the strains on narrow-leafed and white lupin in different potting media in a controlled environment, while the field experiment tested their performance under field conditions. The greenhouse experiment was conducted with 3 narrow-leafed lupin cultivars (Haags Blaue, Boruta and Sonet) and 1 white lupin cultivar (Ludic). Plants were grown in 3 different potting media (soil, 2 peat : 1 sand and 1 peat : 2 sand) with 5 Bradyrhizobium treatments (uninoculated control, commercial peat inoculant of HAMBI 3118 and liquid cultures of HAMBI 3115, HAMBI 3116 and HAMBI 3118). Plants were grown in a greenhouse unit with average day and night temperatures of 22°C and 18°C, illuminated with cool white fluorescent tubes maintaining an 18-hour day and a 6-hour night. In the greenhouse experiment, inoculation significantly increased shoot (117.1-141.9%), root (45.8-64.4%) and nodule (237.0-266.6%) dry weight, plant height (38.3-46%), nodule number (620-659%) and chlorophyll content (29.0-38.5%) over the values found in uninoculated controls. Soil type or potting medium also influenced lupin growth and yield, with the best results observed in soil, poorer in 2 peat : 1 sand and poorest in 1 peat : 2 sand. The best performances were obtained by inoculating with the HAMBI 3115 strain in soil. Uninoculated plants, and even inoculated plants grown in the peat-sand potting media, showed relatively poor results, which was more obvious in the high-yielding cultivars, Boruta and Ludic, than in the low-yielding cultivars, Haags Blaue and Sonet. Inoculation treatments also showed significantly higher shoot (3.15-3.39% N) and root (1.96-2.54% N) nitrogen content. The biological nitrogen fixation rate, measured by the nitrogen difference method, ranged between 87.9 and 90.8% depending on both bacterial strain and host cultivar. The field experiment showed significant increases in shoot (14.4-47.9%), root (11.9-29.1%) and seed (13.8-68.6%) dry weight, plant height (3.6-10.7%), pods per plant (10.7-50.6%) and chlorophyll content (5.7-20.7%) following inoculation of the three narrow-leafed lupin cultivars. Uninoculated plants grown in soil in the greenhouse experiment and in the field experiment both produced some nodules, which showed evidence of the presence of indigenous nodule-forming and nitrogen-fixing bacteria. Among the 3 liquid cultures, HAMBI 3115 performed best in terms of lupin growth, yield and biological nitrogen fixation in both greenhouse and field experiments. The performance of the peat-based commercial inoculant of the HAMBI 3118 strain exceeded all other inoculants in the field experiment but not in the greenhouse experiment, showing the importance of the carrier. The results indicated that lupin growth and yield are strongly affected by Bradyrhizobium inoculation and soil characteristics. Selecting a suitable Bradyrhizobium strain for inoculation and growing cultivars according to their soil preferences can maximize lupin yield. The suitability of HAMBI 3115 for making peat-based inoculants should be tested.
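    The nitrogen difference method mentioned above is commonly defined as follows (the formula is a standard textbook definition, not spelled out in the abstract itself): the proportion of plant nitrogen derived from fixation is estimated by comparing the nitrogen yield of inoculated plants with that of uninoculated controls.

```latex
% Nitrogen difference method (standard definition, assumed here):
\[
  \%N_{\mathrm{dfa}}
    = \frac{N_{\mathrm{inoculated}} - N_{\mathrm{uninoculated}}}
           {N_{\mathrm{inoculated}}} \times 100
\]
```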
  • Aula, Kasimir (2019)
    Air pollution is considered to be one of the biggest environmental risks to health, causing symptoms ranging from headache to lung disease, cardiovascular disease and cancer. To improve awareness of pollutants, air quality needs to be measured more densely. Low-cost air quality sensors offer one solution to increase the number of air quality monitors. However, they suffer from low measurement accuracy compared to professional-grade monitoring stations. This thesis applies machine learning techniques to calibrate the values of a low-cost air quality sensor against a reference monitoring station. The calibrated values are then compared to the reference station's values to compute the error after calibration. In the past, the evaluation phase has been carried out very lightly. A novel method of selecting data is presented in this thesis to ensure diverse conditions in the training and evaluation data, which yields a more realistic impression of the capabilities of a calibration model. To better understand the level of performance, selected calibration models were trained with data corresponding to different levels of air pollution and different meteorological conditions. Regarding pollution level, when using homogeneous training and evaluation data, the error of a calibration model was found to be up to 85% lower than when using a diverse pollution environment for training and evaluation. Also, using diverse meteorological training data instead of more homogeneous data was shown to reduce the error and to stabilize the behavior of the calibration models.
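    A minimal sketch of this kind of sensor calibration: a regression model maps raw low-cost readings plus meteorological covariates to the reference value. The synthetic data, the feature set and the random-forest choice are assumptions for illustration; the thesis does not commit to this exact setup here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 1000
raw = rng.uniform(5, 80, n)        # raw low-cost particulate reading
temp = rng.uniform(-10, 25, n)     # temperature (deg C)
rh = rng.uniform(20, 95, n)        # relative humidity (%)
# Synthetic "reference" value with a humidity-dependent sensor error:
reference = raw * (1 - 0.003 * rh) + 0.1 * temp + rng.normal(0, 1, n)

X = np.column_stack([raw, temp, rh])
# Diverse split: sample train/eval across the whole condition space.
idx = rng.permutation(n)
train, test = idx[:800], idx[800:]
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[train], reference[train])
pred = model.predict(X[test])
print("RMSE after calibration:",
      mean_squared_error(reference[test], pred) ** 0.5)
```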
  • Hou, Jian (2014)
    Pichia pastoris and Saccharomyces cerevisiae are two fungi important in both research and industrial applications of protein production and genetic engineering, owing to their inherent capabilities. For example, these yeasts can produce important proteins from a wide range of substrates, from ligno-cellulosic sugars to methanol. Accurate genome-scale metabolic networks (GMNs) of the two fungi can improve biotechnological production efficiency, drug discovery and cancer research. Comparison of metabolic networks between fungi also offers a new way to study the evolutionary relationship between them. There are two basic steps in modeling metabolic networks. The first step is to construct a draft model from an existing model or with software such as the Pathway Tools software and InterProScan. The second step is model simulation in order to construct a gapless metabolic network. There are two main methods for genome-wide metabolic network reconstruction: constraint-based methods and graph-theoretical pathway finding methods. Constraint-based methods use linear equations to simulate growth of the model under different constraints. Graph-theoretical pathway finding methods use a graph-based approach to construct a gapless model, so that each metabolite can be obtained either from the nutrients or from the products of other gapless reactions. In this thesis, a new method designed by Pitkänen [PJH+14], CoReCo, is used to reconstruct the metabolic networks of Pichia pastoris and Saccharomyces cerevisiae. Five experiments were designed to evaluate the accuracy of the CoReCo method. The first experiment analyzed the quality of the GMNs of Pichia pastoris and Saccharomyces cerevisiae by comparing them with existing models. The second and third experiments tested the stability of the CoReCo reconstructions under random mutations and random deletions in the protein sequences, simulating noisy input data. The next two experiments considered different numbers of phylogenetic neighbors in the phylogenetic tree. The last experiment tested the effect of the two main parameters (acceptance and rejection thresholds) used when CoReCo fills the reaction gaps in the final step.
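    A toy sketch of the graph-theoretical gaplessness idea described above: a metabolite counts as producible if it is a nutrient or the product of a reaction whose substrates are all producible. The reaction names and metabolites are made up; this is not CoReCo's actual algorithm, only the reachability notion it builds on.

```python
def producible_metabolites(reactions, nutrients):
    """reactions: dict name -> (substrates, products). Returns reachable set."""
    produced = set(nutrients)
    changed = True
    while changed:                      # fixed-point iteration
        changed = False
        for name, (subs, prods) in reactions.items():
            if all(s in produced for s in subs) and not set(prods) <= produced:
                produced |= set(prods)
                changed = True
    return produced

reactions = {
    "hexokinase": ({"glucose", "ATP"}, {"g6p", "ADP"}),
    "pgi":        ({"g6p"}, {"f6p"}),
    "orphan":     ({"unknown_precursor"}, {"x5p"}),   # a gap: never fires
}
print(producible_metabolites(reactions, {"glucose", "ATP"}))
# -> {'glucose', 'ATP', 'g6p', 'ADP', 'f6p'}; unreachable 'x5p' reveals a gap
```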
  • Joswig, Niclas (2021)
    Simultaneous Localization and Mapping (SLAM) research is gaining a lot of traction as the available computational power and the demand for autonomous vehicles increase. A SLAM system solves the problem of localizing itself during movement (visual odometry) and, at the same time, creating a 3D map of its surroundings. Both tasks can be solved with expensive and bulky hardware such as LiDARs and IMUs, but the subarea of visual SLAM aims at replacing those costly sensors with, ultimately, inexpensive monocular cameras. In this work I applied the current state of the art in end-to-end deep-learning-based SLAM to a novel dataset comprising images recorded by cameras mounted on an indoor crane from the Konecranes CXT family. One aspect that is unique about our proposed dataset is the camera angle, which resembles a classical bird's-eye view towards the ground. This orientation change, coming alongside a novel scene structure, has a large impact on the subtask of mapping the environment, which in this work is done through monocular depth prediction. Furthermore, I assess which properties of the given industrial environments have the biggest impact on the system's performance, to identify possible future research opportunities for improvement. The main performance impairments I examined, characteristic of most types of industrial premises, are non-Lambertian surfaces, occlusion, and texture-sparse areas along the ground and walls.
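    A generic sketch of the error metrics commonly used to judge monocular depth prediction (AbsRel, RMSE and the delta-threshold accuracy); these are standard in the depth-prediction literature, not code from the thesis, and the arrays are placeholders.

```python
import numpy as np

def depth_metrics(pred, gt):
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    mask = gt > 0                      # ignore pixels without ground truth
    pred, gt = pred[mask], gt[mask]
    abs_rel = np.mean(np.abs(pred - gt) / gt)
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    ratio = np.maximum(pred / gt, gt / pred)
    delta1 = np.mean(ratio < 1.25)     # fraction of pixels within 25%
    return {"abs_rel": abs_rel, "rmse": rmse, "delta<1.25": delta1}

print(depth_metrics([2.1, 3.9, 7.6], [2.0, 4.0, 8.0]))
```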
  • Noordsij, Dennis (2015)
    Application of machine learning methods to the analysis of functional neuroimaging signals, or 'brain-function decoding', is a highly interesting approach for better understanding human brain function. Recently, Kauppi et al. presented a brain-function decoder based on a novel feature-extraction approach using spectral LDA, which allows both high classification accuracy (the authors used sparse logistic regression) and novel neuroscientific interpretation of MEG signals. In this thesis we evaluate the performance of their brain-function decoder with additional classification and input-feature-scaling methods, providing possible additional options for their spectrospatial decoding toolbox SpeDeBox. We find that the performance of their brain-function decoder validates the potential of high-frequency rhythmic neural activity analysis, and that the logistic regression classifier provides the highest classification accuracy of the methods compared. We did not find additional benefit in applying prior input feature scaling or reduction methods.
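    A minimal sketch of the kind of comparison run here: several classifiers, with and without input feature scaling, cross-validated on extracted features. The feature matrix and labels are placeholders, not the MEG data, and the classifier set is an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))               # placeholder spectral features
y = (X[:, :5].sum(axis=1) > 0).astype(int)   # placeholder binary task

classifiers = {
    "logreg": LogisticRegression(max_iter=1000),
    "svm": SVC(),
    "rf": RandomForestClassifier(random_state=0),
}
for name, clf in classifiers.items():
    for scaled in (False, True):
        model = make_pipeline(StandardScaler(), clf) if scaled else clf
        acc = cross_val_score(model, X, y, cv=5).mean()
        print(f"{name:6s} scaled={scaled}: {acc:.3f}")
```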
  • Roininen, Nelli Aurora (2016)
    Externally visible characteristics such as hair, skin and eye pigmentation or clothing have always been used for suspect identification. The opposite, linking unknown body parts or DNA alone to a person, has recently begun to be introduced into forensics. I tested the feasibility of one such method, IrisPlex, in the Finnish population. The IrisPlex method was first published by Walsh et al. (2011). IrisPlex uses six single nucleotide polymorphisms (SNPs) in different genes, found to have the greatest effect on eye color variation. The SNPs were detected from DNA samples by a single base extension method, SNaPshot. Based on this information from a participant's DNA, a prediction of the most probable eye color was generated with a multinomial regression model. The genotypic information at the six loci and differences between Eastern and Western Finns were also studied. In addition, this study supplements the knowledge of eye color frequencies across Europe. This study revealed that IrisPlex works appropriately in Finns when detecting blue and brown eye colors: 80% of the study participants' eye colors were predicted correctly. The biggest weakness of IrisPlex is its inability to predict intermediately colored eyes. Prediction probability differences between genders were not detected. In the study population, 60% of the participants had blue eyes (28 individuals), 13% had brown eyes (6 individuals) and 28% (13 individuals) had intermediately colored eyes. When the eyes were divided into two categories, the proportion of blue-eyed participants was 77% (36 individuals) and of brown-eyed participants 23% (11 individuals). These results are consistent with previous studies and update the color frequencies. The genetic segmentation of Finnish people into Eastern and Western Finland has been established in multiple studies, and this study is consistent with that segmentation. Darker eyes were observed slightly more frequently in participants with north-eastern heritage than in participants with south-western heritage. However, the studied populations were small and the result was statistically insignificant. Additionally, the studied population sample represents the narrow gene pool of the Finns; almost half of the participants, 42%, reported all or three of their grandparents to have been born in the same village. The allele and genotype frequencies were also studied and compared to another study in which these SNPs were examined in Finns; the results were consistent. Altogether, this study strengthens the evidence that IrisPlex has potential in forensic, archaeological and anthropological applications even in genetically isolated populations such as the Finns. This study supports further development of the IrisPlex method and especially highlights the need for better sensitivity for intermediate eye colors.
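    A minimal sketch of an IrisPlex-style prediction model: multinomial logistic regression over six SNP genotypes coded 0/1/2 (copies of the minor allele). The genotypes, labels and the SNP-to-color relationship below are fabricated for illustration; they are not the IrisPlex model parameters.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=(150, 6))          # 6 SNP genotypes per person
# Fake three-class ground truth loosely driven by the first two SNPs:
classes = np.array(["blue", "intermediate", "brown"])
score = X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 150)
y = classes[np.digitize(score, [1.5, 3.0])]

model = LogisticRegression(max_iter=1000)      # multinomial with lbfgs
model.fit(X, y)
new_person = [[2, 1, 0, 2, 1, 0]]
print(model.predict(new_person))
print(dict(zip(model.classes_, model.predict_proba(new_person)[0].round(3))))
```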
  • Snellman, Oliver (2019)
    It has lately become common practice among national authorities with macroeconomic mandates to build large Dynamic Stochastic General Equilibrium (DSGE) models to assist in forecasting and policy analysis. The Finnish Ministry of Finance has also developed a small open economy New Keynesian DSGE model, “KOOMA”. As DSGE models try to emulate the key features and dynamics of the economy, the crucial question is how well they function in accordance with reality. An answer to this question can be sought using Structural Vector Autoregression (SVAR) models, which are natural econometric counterparts to DSGE models and are better suited for analyzing data. The aim of this study is to evaluate the calibration of KOOMA with a SVAR model identified with sign restrictions. I compare impulse response functions from the SVAR model, which are found to be both statistically significant and robust to changes in model specification, to the equivalent impulse response functions from KOOMA. The findings suggest that KOOMA generally produces impulse responses with the same signs as the SVAR model, but there are some differences in the magnitudes and persistence of the responses.
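    A minimal sketch of sign-restriction identification for a SVAR: draw random orthogonal rotations of the reduced-form impact matrix and keep those whose impact responses match the imposed signs. The dimensions, covariance matrix and sign pattern are illustrative, not KOOMA's or the thesis's actual restrictions.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 3                                   # number of variables
sigma = np.array([[1.0, 0.3, 0.1],      # reduced-form residual covariance
                  [0.3, 1.0, 0.2],
                  [0.1, 0.2, 1.0]])
P = np.linalg.cholesky(sigma)           # baseline impact matrix

# Required signs of the impact responses to shock 1, e.g. (+, +, -):
required = np.array([1, 1, -1])

accepted = []
for _ in range(10000):
    A = rng.normal(size=(k, k))
    Q, R = np.linalg.qr(A)
    Q *= np.sign(np.diag(R))            # normalize for a uniform draw
    impact = P @ Q
    if np.all(np.sign(impact[:, 0]) == required):
        accepted.append(impact)
print(f"{len(accepted)} of 10000 rotations satisfy the sign restrictions")
```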
  • Olkkonen, Valter (2020)
    Central counterparties (CCPs) interpose themselves between the counterparties to contracts traded on financial markets, becoming the buyer to every seller and the seller to every buyer. Over the past two decades, CCPs have grown into some of the world’s most interconnected financial market infrastructures, clearing financial instruments worth trillions of dollars a day. A possible default of a CCP has been described as an extremely high-impact event, and notable academics have considered that a default of a CCP would require a large-scale taxpayer bail-out. CCP-related financial stability concerns have grown especially in the European Union (EU) as the United Kingdom (UK) exited the EU on 31 January 2020 (Brexit). Many of the world’s leading CCPs are located in the City of London, and Brexit has meant that these CCPs will no longer be authorised and supervised pursuant to the European Market Infrastructure Regulation (EU) No 648/2012 (EMIR). They are moving out of the EU’s jurisdiction and becoming “third country CCPs” from the perspective of EU regulation. The EU confronts these growing financial stability concerns by revising EMIR with European Market Infrastructure Regulation (EU) 2019/2099 (EMIR 2.2). EMIR 2.2 entered into force on 1 January 2020 and introduces an unconventional approach to regulating third country CCPs. This approach includes a requirement for third country CCPs to accept direct regulation and supervision by EU authorities despite the CCPs being located outside the EU’s jurisdiction. In addition, the regime enables the controversial “location policy”, a mechanism to compel third country CCPs to relocate into the EU. This research examines the tensions in extraterritorial regulation of systemically important CCPs and how EMIR 2.2 succeeds in its objective of reinforcing the overall stability of the Union financial system in relation to third country CCPs. The first part examines extraterritorial regulation and supervision of systemically important CCPs through the financial trilemma, a theory developed in economics and international governance, to discover the underlying tensions. The aim is to arrive at a framework that enables meaningful evaluation of EMIR 2.2. The second part evaluates whether EMIR 2.2 is capable of achieving its objective.
  • Bubolz, Jéssica (2022)
    Late blight, caused by Phytophthora infestans (Mont.) de Bary, is considered the most devastating disease in potato (Solanum tuberosum L.) production worldwide. Control methods rely mostly on the use of fungicides, which are costly and under political pressure for reduction in Europe. Potatoes from the major potato cultivar in Sweden, King Edward, previously stacked with three resistance (R) genes (RB, Rpi-blb2 and Rpi-vnt1.1), were tested in a local Swedish field with spontaneous P. infestans infection over three seasons to evaluate the effectiveness and stability of the resistance on leaves. In addition, resistance was tested in both leaves and tubers. Field results demonstrated that the 3R genes stacked into the cultivar King Edward conferred practically full resistance to P. infestans infection, with no difference from fungicide treatment. Moreover, the resistance was effective in both leaves and tubers. The results reveal that the 3R potatoes offer functional field resistance that could alone reduce the total use of fungicides in Swedish agriculture by several percent, in the event of modifications to EU legislation.
  • Bortolussi, Federica (2022)
    The exploration of mineral resources is a major challenge in a world that seeks sustainable energy, renewable energy, advanced engineering, and new commercial technological devices. The rapid decrease in mineral reserves has shifted the focus to under-explored and low-accessibility areas, which has led to the use of on-site portable techniques for mineral mapping purposes, such as near infrared hyperspectral image sensors. The large datasets acquired with these instruments need data pre-processing, a series of mathematical manipulations that can be achieved using machine learning. The aim of this thesis is to improve an existing method for mineralogy mapping, focusing on the mineral classification phase. More specifically, a spectral similarity index was utilized to support machine learning classifiers. This was introduced because of the inability of the employed classification models to recognize samples that are not part of a given database: the models always classified samples under one of the known labels of the database. This can be a problem in hyperspectral images, as a pure component found in a sample could correspond to a mineral but also to noise or artefacts arising for a variety of reasons, such as baseline correction. The spectral similarity index quantifies the similarity between a sample spectrum and its assigned database class spectrum; a threshold then defines whether the sample belongs to the class or not. The metrics utilized in the spectral similarity index were the spectral angle mapper (SAM), the correlation coefficient and five different distances. The machine learning classifiers used to evaluate the spectral similarity index were the decision tree, k-nearest neighbor, and support vector machine. Simulated distortions were also introduced into the dataset to test the robustness of the indexes and to choose the best classifier. The spectral similarity index was assessed with a dataset of nine minerals acquired from the Geological Survey of Finland with a Specim SWIR camera. The validation of the indexes was carried out with two mine samples obtained with a VTT active hyperspectral sensor prototype. The support vector machine was chosen after the comparison between the three classifiers, as it showed the highest tolerance to distorted data. The evaluation of the spectral similarity indexes showed that the best performances were achieved with SAM and the Chebyshev distance, which maintained high stability under both smaller and larger threshold changes. The best threshold value found was the one that, in the dataset analysed, corresponded to the number of spectra available for each class. As no reference was available for the validation procedure, the results for the mine samples obtained with the spectral similarity index were compared with results obtainable through visual interpretation, and the two were in agreement. The proposed method can be useful for future mineral exploration, as it is of great importance to correctly classify minerals found during exploration, regardless of the database utilized.
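    A minimal sketch of the spectral-similarity check described: compute the spectral angle mapper (SAM) between a sample spectrum and its assigned class spectrum, and reject the label when the angle exceeds a threshold. The spectra and the threshold value are made up for illustration.

```python
import numpy as np

def spectral_angle(a, b):
    """SAM: angle (radians) between two spectra viewed as vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def accept_label(sample, class_mean, threshold=0.15):
    """Keep the classifier's label only if the sample is close enough."""
    return spectral_angle(sample, class_mean) <= threshold

quartz_mean = np.array([0.52, 0.55, 0.60, 0.58, 0.50])   # made-up spectra
sample      = np.array([0.50, 0.56, 0.61, 0.57, 0.49])
noise       = np.array([0.90, 0.10, 0.80, 0.05, 0.95])
print(accept_label(sample, quartz_mean))   # True  -> keep the class label
print(accept_label(noise, quartz_mean))    # False -> flag as unknown
```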
  • Österman, Juuso (2019)
    Modern high energy physics describes natural phenomena in terms of quantum field theories (QFTs). The relevant calculations in QFTs aim at the evaluation of physical quantities, which often leads to the application of perturbation theory. In non-thermal theories these quantities emerge from, for example, scattering amplitudes. In high-temperature theories thermodynamical quantities, such as pressure, arise from the free energy of the system. The actual computations are often performed with Feynman diagrams, which visually illustrate multi-dimensional momentum (or coordinate) space integrals. In essence, master integrals are integral structures (within these diagrams) that cannot be reduced to more concise or simpler integral representations. They are crucial in performing perturbative corrections to any system described by a QFT, as the diagrammatic structures reduce to linear combinations of master integrals. Traditional zero-temperature QFT relates the corresponding master integrals to multi-loop vacuum diagrams, which leads in practice to the evaluation of $d$-dimensional regularized momentum integrals. Upon transitioning to thermal field theory (TFT), the corresponding master integrals become multi-loop sum-integrals. Both the thermal and non-thermal master integral structures are explored at length, using the $\overline{\text{MS}}$ (modified minimal subtraction) scheme in the calculations. Throughout this thesis, a self-consistent methodology is presented for the evaluation of both types of master integrals, while limiting the calculations to one- and two-loop diagrams; the methods, however, generalize readily to more complex systems. The physical background of master integrals is introduced through a derivation of Feynman rules and diagrams for $\phi^4$ scalar field theory. Afterwards, the traditional $d$-dimensional master integral structures are considered, up to general two-loop structures with massive propagators. The evaluation strategies involve e.g. the Feynman parametrization and the Mellin-Barnes transform. The application of these results is demonstrated through the evaluation of three different diagrams appearing in the two-loop effective potential of the dimensionally reduced variant of the Standard Model. The relevant thermal one-loop integral structures are introduced through the high-temperature expansion of a massive one-loop sum-integral (with a single massive propagator). The thermal multi-loop computations are predominantly handled with a methodology that decomposes the integrals into finite and infinite elements. Specifically, we demonstrate the removal of both the ultraviolet (UV) and infrared (IR) divergences, and evaluate the remaining finite integral using the Fourier transform from momentum space back to coordinate space. The strategies are applied to multiple non-trivial diagrammatic structures arising from the Standard Model.
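    As a concrete instance of the evaluation strategies mentioned, the Feynman parametrization for two propagators, and its general form for arbitrary powers, reads (a textbook formula, not a result specific to this thesis):

```latex
\[
  \frac{1}{AB} = \int_0^1 \frac{\mathrm{d}x}{\left[ xA + (1-x)B \right]^2},
\qquad
  \frac{1}{A^{\alpha} B^{\beta}}
    = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\,\Gamma(\beta)}
      \int_0^1 \mathrm{d}x\,
      \frac{x^{\alpha-1}(1-x)^{\beta-1}}{\left[ xA + (1-x)B \right]^{\alpha+\beta}}
\]
```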
  • Suomela, Samu (2021)
    Large graphs often have labels for only a subset of nodes. Node classification is a semi-supervised learning task in which unlabeled nodes are assigned labels utilizing the known information of the graph. In this thesis, three node classification methods are evaluated on two metrics: computational speed and node classification accuracy. The three methods evaluated are label propagation, harmonic functions with Gaussian fields, and the Graph Convolutional Neural Network (GCNN). Each method is tested on five citation networks of different sizes extracted from a large scientific publication graph, MAG240M-LSC. For each graph, the task is to predict the subject areas of scientific publications, e.g., cs.LG (Machine Learning). The motivation of the experiments is to give insight into whether the methods would be suitable for automatic labeling of scientific publications. The results show that label propagation and harmonic functions with Gaussian fields reached mediocre accuracy in the node classification task, while the GCNN had low accuracy. Label propagation was computationally slow compared to the other methods, whereas harmonic functions were exceptionally fast. Training the GCNN took a long time compared to harmonic functions, but its computational speed was acceptable. However, none of the methods reached a classification accuracy high enough to be utilized in automatic labeling of scientific publications.
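    A minimal sketch of label propagation: iteratively diffuse label scores over the row-normalized adjacency matrix while clamping the known labels, then read off the majority class per node. The toy graph is illustrative, not the MAG240M-LSC data.

```python
import numpy as np

A = np.array([[0, 1, 1, 0, 0],      # adjacency matrix of a small graph
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], float)
labels = np.array([0, 0, -1, -1, 1])   # -1 marks unlabeled nodes
n_classes = 2

Y = np.zeros((len(labels), n_classes))
for i, l in enumerate(labels):
    if l >= 0:
        Y[i, l] = 1.0

D_inv = np.diag(1.0 / A.sum(axis=1))
P = D_inv @ A                           # row-normalized transition matrix
F = Y.copy()
for _ in range(100):
    F = P @ F
    F[labels >= 0] = Y[labels >= 0]     # clamp the known labels
print(F.argmax(axis=1))                 # predicted class per node
```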
  • Dang, Thu Ha (2023)
    Immune checkpoint inhibitor (ICI) therapy aims to enhance the endogenous immune response against tumour cells, and it has become a potent treatment option for various types of cancers. Despite the promise of ICIs, most patients do not respond to the treatment. The primary limitation of ICI therapy is the immunosuppressive tumour microenvironment (TME), which is characterised by the lack of tumour-infiltrating cytotoxic T cells (CTLs) and the presence of immunosuppressive cells, such as tumour-associated macrophages (TAMs). A promising immunotherapeutic strategy that can promote antitumour immunity is oncolytic virus (OV) therapy. OVs can selectively replicate in and kill cancer cells, leading to the release of immunostimulatory molecules. These molecules can induce local inflammation and prime and recruit CTLs to the tumour site. In addition, OVs can also be used as a delivery platform for immunostimulatory transgenes that can further enhance the activation of the anti-tumour immune response and help to overcome the immunosuppressive TME. Another strategy used to support anti-tumour immune responses and overcome the immunosuppressive TME is epigenetic therapy. Epigenetic therapy can reprogram both cancer and immune cells towards a less immunosuppressive phenotype, thus helping to overcome the limitations of immune checkpoint therapy. The aim of this study was to generate a novel oncolytic adenovirus armed with an epigenetic-modifying transgene (EpiCRAd) to overcome the immunosuppressive TME and enhance the anti-tumour immune response. We tested its efficacy and immunogenicity in vitro and in vivo using a murine triple-negative breast cancer model. We demonstrated that EpiCRAd was able to modulate the epigenome of cancer cells without affecting the virus's infectivity. Upon examining the potential effect of EpiCRAd on cancer cells, we observed that epigenetic regulation did not notably influence the expression of MHC class I and PD-L1 proteins, both of which play a role in the immune evasion mechanism of tumour cells. In addition, the in vivo experiments showed that EpiCRAd controlled tumour growth best, especially together with an immune checkpoint inhibitor, suggesting that the virus was able to create an immune microenvironment more favourable for an anti-tumour response. Interestingly, TAM infiltration in the TME seemed to decrease after treatment with EpiCRAd. Overall, the combination of epigenetic therapy with oncolytic virotherapy has shown promising results in converting immunotherapy-resistant tumours into immunotherapy-responsive tumours. Our findings provide valuable insights into the effect of EpiCRAd on cancer and immune cells. This study encourages exploring the use of epigenetic cancer remodelling and oncolytic viruses for cancer immunotherapy.
  • Ojala, Elina (2018)
    The cardiotoxicity of new drugs usually manifests as disturbances of the heart's electrical activity (arrhythmias) and as impaired contractility. Currently, the cardiac safety of new drug molecules is assessed mainly by studying a compound's effect on hERG ion channel activity and its effect on action potential duration. Current methods are based on single ion channel studies and patch clamp measurements in cell models, as well as on toxicity studies in laboratory animals. These methods are laborious, and the results do not always reliably predict the cardiac safety of a drug in the human body. Optogenetics is a method based on optics and genetic modification that enables studying and controlling target cells with light. The method utilizes light-sensitive proteins, opsins. Once the genes encoding opsins have been transferred into target cells, the activity of the cells can be assessed and controlled with light of different wavelengths. The optogenetic method is also well suited to studying the electrical activity of cardiomyocytes and to pacing them. Electrophysiological measurements of cardiomyocytes can reveal, for example, the arrhythmogenic effects of drugs. The ability to pace cardiomyocytes during experiments is important, because some arrhythmogenic drug effects appear only at higher-than-normal beating rates. The aim of this study was to set up and validate a new optogenetic method for assessing drug-induced cardiotoxicity. The light-sensitive opsins Optopatch and CaViar were introduced into human induced pluripotent stem cell (iPSC) derived cardiomyocytes by lentiviral transduction. Transduced cardiomyocytes were paced with blue laser light. The electrical activity of the cardiomyocytes was assessed by measuring action potentials and intracellular calcium flux with optogenetic fluorescence video microscopy. A Matlab-based automatic analysis software was developed for processing the imaging data. Electrophysiological variables such as beating rate, action potential duration (APD) and amplitude (APA), and calcium transient duration (CTD) were analysed from the video microscopy data using the software. Video microscopy was also combined with the optogenetic method to assess the contractility of the cells. The results obtained with the optogenetic method were compared with those obtained with the patch clamp method. The lentiviral transductions were not toxic to the cells, and the viruses did not cause statistically significant changes in the electrical activity of the cardiomyocytes. The electrophysiological variables recorded with the optogenetic method (beating rate, APD, APA) did not differ statistically significantly from the results obtained with patch clamp measurements. Based on our results, the optogenetic method can be considered as reliable as the traditional patch clamp method. In the drug experiments, the cardiomyocytes were exposed to E-4031 (a hERG potassium channel blocker). Dose-response studies were performed both at spontaneous beating rate and at rhythms paced to 1 Hz and 2 Hz. At low concentrations E-4031 prolonged the action potential duration, followed at high concentrations by early afterdepolarizations (EADs) and finally by cessation of beating. Patch clamp and contractility measurements showed similar responses to E-4031 exposure. A similar but more modest drug response than with E-4031 was observed when the cells were exposed to JNJ-303 (an IKs potassium channel blocker). The optogenetic cardiomyocyte method is suitable for replacing and complementing the traditional patch clamp method in cardiotoxicity studies of new drugs. For single ion channel studies, the patch clamp method remains the most suitable. Owing to its speed, optogenetic imaging is particularly well suited to efficacy and safety screening of drug candidates. As a non-invasive method, optogenetics also enables long-term drug exposure experiments.
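    A minimal sketch of how an electrophysiological variable like APD90 (action potential duration at 90% repolarization) can be extracted from an optically recorded trace; the synthetic waveform stands in for real fluorescence data, and this is not the thesis's Matlab software.

```python
import numpy as np

def apd(time, signal, level=0.9):
    """Duration from upstroke to `level` repolarization (e.g. APD90)."""
    s = (signal - signal.min()) / (signal.max() - signal.min())
    peak = s.argmax()
    thr = 1.0 - level                          # 0.1 for APD90
    upstroke = np.argmax(s[:peak + 1] >= thr)  # first crossing on the way up
    repol = peak + np.argmax(s[peak:] <= thr)  # first crossing on the way down
    return time[repol] - time[upstroke]

t = np.linspace(0.0, 1.0, 1000)                # seconds
v = np.where(t < 0.1, 0.0, np.exp(-(t - 0.1) / 0.15))  # synthetic AP trace
print(f"APD90 ~ {apd(t, v) * 1000:.0f} ms")    # ~345 ms for this trace
```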
  • Jalonen, Milla (2020)
    There are significant inter-individual differences in the effects of drugs. These differences can be caused by, for example, other diseases, adherence to treatment, or drug-drug interactions. A drug-drug interaction can lead to an increase in the concentration of the active substance in the circulation (pharmacokinetic interactions) or to a change in the effect of the drug without changes in plasma concentration (pharmacodynamic interactions). A drug-drug interaction can change the efficacy of a drug or affect its adverse drug reaction profile. The individual's genetic background, such as diversity in drug-modifying enzymes (polymorphism), also has an effect on the efficacy of, and the risk for adverse reactions to, some drugs. A pharmacogenetic test can be used to study how genetic factors affect drug treatments. The aim of this master's thesis was to examine the possibilities of personalized migraine pharmacotherapy from the perspective of pharmacogenomics and drug-drug interactions. Four online drug-drug interaction databases available in Finland were compared. Inxbase is the interaction database most widely used by physicians in Finland, and it is also integrated into Finnish pharmacy systems. The other databases used in this study were the international professional database Micromedex as well as the Medscape Drug Interaction Checker and the Drugs.com Drug Interactions Checker; the latter two are open-access databases available to healthcare professionals and patients. Interaction searches were conducted in the selected databases for pairs of acute and prophylactic drugs used for the treatment of migraine (e.g. bisoprolol-sumatriptan). Fourteen acute and 12 prophylactic drugs were selected for this study based on the Current Care Guidelines in Finland (Käypä hoito), and the data were collected in Excel spreadsheets. The first search was completed in December 2019 and the second search in March 2020. In this study, many potential interactions were found between acute and prophylactic drugs used to treat migraine in Finland. For more than half of the drug pairs studied, a potential interaction was found in at least one of the databases. There were significant differences between the interaction databases regarding which interactions each database contains and how the severity of the interactions was classified. Of the interactions found, only 45% were found in all four databases, and each database contained interactions that were not found in the others. Even very serious interactions, or drug pairs classified as contraindicated, were not consistently present across all four databases. When selecting drug treatment for a migraine patient, potential drug-drug interactions between acute and prophylactic drugs as well as the patient's genetic background should be considered. Individualizing migraine treatment to achieve the best efficacy and to reduce the risk of adverse drug reactions is important, because migraine as a disease places a heavy burden on individuals, healthcare, and society. Pharmacogenetic tests developed particularly to help in choosing migraine treatment are not yet available, but tests are available for a few other indications in both public and private healthcare. The use of these tests in clinical practice will increase as physicians' pharmacogenetic knowledge and the scientific evidence on pharmacogenetic tests increase. Utilization of pharmacogenetic data requires that test results are stored in electronic health records so that they are available in the future when changes are made to an individual's drug treatment. More studies are warranted to better understand the clinical impact of pharmacogenomics and drug-drug interactions in migraine care.
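    A minimal sketch of the cross-database comparison: treat each database's findings as a set of drug pairs and compute which pairs are flagged by all, some, or only one database. The pairs below are placeholders, not the study data.

```python
databases = {
    "Inxbase":    {("bisoprolol", "sumatriptan"), ("amitriptyline", "sumatriptan")},
    "Micromedex": {("amitriptyline", "sumatriptan"), ("propranolol", "rizatriptan")},
    "Medscape":   {("amitriptyline", "sumatriptan"), ("bisoprolol", "sumatriptan")},
    "Drugs.com":  {("amitriptyline", "sumatriptan")},
}

all_pairs = set().union(*databases.values())
in_all = set.intersection(*databases.values())
print(f"flagged anywhere: {len(all_pairs)}, flagged in all four: {len(in_all)}")
print(f"fraction found in all four: {len(in_all) / len(all_pairs):.0%}")
for name, pairs in databases.items():
    others = set().union(*(p for n, p in databases.items() if n != name))
    print(name, "uniquely flags:", pairs - others or "nothing")
```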
  • Paakkunainen, Jonna (2023)
    Parkinson's disease is a progressive neurodegenerative disorder that is commonly treated with levodopa (L-dopa) and dopa decarboxylase (DDC) / catechol-O-methyltransferase (COMT) inhibitors. The main problem with this treatment is the intestinal conversion of L-dopa to dopamine despite DDC and COMT inhibition, which probably occurs via the tyrosine decarboxylase (TyrDC) of intestinal bacteria. This study aims to find new inhibitor molecules that would have dual inhibitory effects towards both the DDC and TyrDC enzymes. Currently available DDC inhibitors cannot inhibit the bacterial TyrDC enzyme, and a recently found TyrDC inhibitor, (S)-α-fluoromethyltyrosine (AFMT), is likewise not able to inhibit the human DDC enzyme. Dual inhibition of both decarboxylases could reduce the dosing frequency and the side effects related to L-dopa. In addition, the objectives of this study were to produce the human DDC enzyme by recombinant DNA techniques, to develop and optimize a biochemical DDC inhibition assay for studying the effect of selected small molecule compounds on DDC inhibition, and to study their effect on L-dopa conversion in an E. faecalis model using a previously developed cell-based assay. The human DDC was successfully produced in TB medium with a yield of 1.8 mg/mL. The Km value of DDC for L-dopa was found to be 34 μM, which indicates a high affinity for L-dopa. In the optimization of the DDC inhibition assay, a sample volume of 80 μL and an incubation time of 3 h with the detection reagent were found to give the highest fluorometric signal with sufficient robustness. In the initial screening of test compounds, 14% of the compounds (n=59) were classified as active towards human DDC, while 31% of the compounds were active towards L-dopa conversion in the E. faecalis model. Of those compounds, five had dose-dependent dual inhibitory effects, but their IC50 values were higher than those of either carbidopa or AFMT. The most effective compounds were 8009-2501 (IC50 37 μM in the E. faecalis model and 19% inhibition at 1000 μM towards the DDC enzyme) and 8012-3386 (IC50 248 μM in the E. faecalis model and 37% inhibition at 1000 μM towards the DDC enzyme). However, this study confirms the possibility of finding dual decarboxylase inhibitors. By optimizing the structures and investigating the mechanism of action, selectivity, and structure-activity relationships of the most active compounds, it may be possible to find more effective dual inhibitors in the future.
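    A minimal sketch of how an IC50 value like those quoted above is typically derived: fit a four-parameter Hill (logistic) dose-response curve to activity measurements. The data points are invented for illustration, not the screening results.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, slope):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

conc = np.array([1, 3, 10, 30, 100, 300, 1000], float)   # uM
activity = np.array([98, 95, 80, 55, 30, 12, 5], float)  # % of control

popt, _ = curve_fit(hill, conc, activity,
                    p0=[0, 100, 30, 1])                  # initial guesses
print(f"fitted IC50 ~ {popt[2]:.0f} uM, Hill slope ~ {popt[3]:.2f}")
```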
  • Sun, Yuting (2015)
    Cereal β-glucan is a water-soluble cell wall polysaccharide that has positive health effects in humans. Oxidative degradation of β-glucan may occur during food processing, leading to the loss of the physiological functionality of β-glucan. Oxidative degradation can result in cleavage of the polysaccharide chain, the formation of oxidised functional groups (e.g. carbonyls) along the chain, or the release of carboxylic acids (e.g. formic acid). In the case of β-glucan, chain scission and the formation of oxidised functional groups due to hydroxyl-radical-induced oxidation have been shown, but the released carboxylic acids have not been identified. The aim was therefore to study the oxidation pathway of β-glucan by analysing its degradation products. The focus was the release of carboxylic acids, especially formic acid. The change in the molecular structure of β-glucan after the release of formic acid was also analysed. Barley β-glucan water solutions were oxidised with H2O2 and ascorbic acid at different concentrations (5, 10, 40, 70 mM), in the presence of 1 mM FeSO4·7H2O. Samples were collected on days 1, 2 and 4, and formic acid was analysed using a formic acid assay kit. To evaluate the structure of oxidised β-glucan, part of the samples underwent reduction to convert any carbonyl groups back to hydroxyl groups. The oligosaccharide and monosaccharide compositions of the samples were then analysed. The results showed that formic acid was formed in H2O2-treated β-glucan and that its content was positively correlated with H2O2 concentration in the presence of Fe2+. Formic acid was also formed in ascorbic acid-treated β-glucan, but an obvious increase in formic acid content at increased ascorbic acid concentrations was not observed. Formic acid accumulated in the β-glucan solution over time. The monosaccharide composition showed that the samples were mainly composed of glucose. In H2O2-treated β-glucan, however, an additional component was observed, which was identified as arabinose. Arabinose was reduced by the reducing agent, indicating that arabinose was formed at the reducing end of oxidised β-glucan. The content of arabinose increased with increasing H2O2 concentration, concomitant with a decreasing glucose content. Arabinose content decreased from oxidation day 1 to day 4. Oxidative degradation of β-glucan is proposed to proceed progressively, with random chain scission and degradation of the reducing ends. Formic acid was released due to oxidation, and arabinose was formed at the reducing end. As oxidation proceeded, we suggest that the reducing end unit was degraded stepwise to release formic acid. Formic acid is demonstrated to be an oxidation product of β-glucan for the first time. The released formic acid was well related to the degree of oxidation induced by H2O2 and Fe2+. Therefore, formic acid can be used as an indicator of the oxidation of β-glucan induced by H2O2 and Fe2+.
  • Virtanen, Ville Valtteri (2018)
    Auctions are at the core of the field of dynamic pricing. Prices change as a function of time, either ascending or descending, and the objective of this kind of pricing mechanism is to allocate the good to the one who is willing to pay the winning price. Northwestern University has for several years used a pricing mechanism (Purple Pricing) combining characteristics of a descending auction and dynamic pricing in order to sell tickets to the university's basketball team's home matches. The aim of this thesis is to examine and evaluate the functioning of this kind of mechanism. The main sources of material and content for this literature survey can be categorized into three branches: models and main theoretical results were drawn from the related economic literature, the practical model of Purple Pricing was taken as a separate research topic, and some key facts regarding the functioning of the pricing mechanism were gathered through an inquiry. The main results were that Purple Pricing can be modelled using a Bellman equation and characterised as a descending auction in which the agents (the university and the spectators) interact, with both sides having their own separate maximization problems. The agents can be divided into a seller and bidders. With certain assumptions regarding the agents, the model can be solved. The solution itself provides optimality conditions that solve the allocation (and revenue maximization) problem of the organizer (the university). The functioning of the mechanism has produced results concerning ticket allocation but also real-life side effects. Besides being socially efficient in allocating the tickets, its merits have included the disclosure of the demand curve and the abolition of the black market. Although, due to its special pricing policy, it was not able to reach the optimum (maximum) revenue, revenue did increase. Similar pricing mechanisms, with context-related adjustments, could potentially be used in Finnish football.
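    A minimal sketch of the kind of Bellman-equation model that can describe a descending-price ticket sale: each period the seller posts a price, a buyer with a uniform private value may arrive, and the value function V[t][n] (periods left, tickets left) is solved by backward induction. All parameters are illustrative assumptions, not Purple Pricing's actual model.

```python
import numpy as np

T, N = 50, 10                          # selling periods, tickets available
arrival = 0.6                          # probability a buyer arrives per period
prices = np.linspace(0.05, 0.95, 19)   # admissible price grid (values in [0, 1])

V = np.zeros((T + 1, N + 1))           # V[T][n] = 0: unsold tickets expire worthless
policy = np.zeros((T, N + 1))
for t in range(T - 1, -1, -1):
    for n in range(1, N + 1):
        # sale probability at price p for a U[0,1]-valued buyer: arrival * (1 - p)
        sale = arrival * (1.0 - prices)
        value = sale * (prices + V[t + 1][n - 1]) + (1 - sale) * V[t + 1][n]
        best = value.argmax()
        V[t][n] = value[best]
        policy[t][n] = prices[best]

print("optimal opening price:", policy[0][N])
print("expected revenue:", round(V[0][N], 3))
```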