
Browsing by discipline "none"


  • Bazaliy, Viacheslav (2019)
    This thesis provides an analysis of the Growth Optimal Portfolio (GOP) in discrete time. The Growth Optimal Portfolio is a portfolio optimization method that aims to maximize expected long-term growth. One of the main properties of the GOP is that, as the time horizon increases, it outperforms all other trading strategies almost surely. Therefore, compared with other common methods of portfolio construction, the GOP performs well in the long term but might provide riskier allocations in the short term. The first half of the thesis considers the GOP from a theoretical perspective. Connections to other concepts (the numeraire portfolio, arbitrage freedom) are examined and derivations of its optimality properties are given. Several examples where the GOP has explicit solutions are provided, and sufficient and necessary conditions for growth optimality are derived. The main focus of this thesis, however, is on the practical aspects of GOP construction. An iterative algorithm for finding GOP weights in the case of independently log-normally distributed growth rates of the underlying assets is proposed. The algorithm is then extended to the case of a non-diagonal covariance structure and to the case where a risk-free asset is present on the market. Finally, it is shown how the GOP can be implemented as a trading strategy when the underlying assets are modelled by ARMA or VAR models. Simulations with assets from the real market are provided for the period 2014-2019. Overall, a practical step-by-step procedure for constructing GOP strategies with real market data is developed. Given the simplicity of the procedure and the appealing properties of the GOP, it can be used in practice alongside other common portfolio construction models such as the Markowitz and Black-Litterman models.
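    The growth-optimal weights described above can be sketched numerically. A minimal illustration, assuming three hypothetical assets with independent log-normal gross growth rates; the means, volatilities and sample size below are invented for the example and are not from the thesis:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Three hypothetical assets with independent log-normal gross growth rates.
mu = np.array([0.05, 0.08, 0.02])      # assumed log-growth means
sigma = np.array([0.15, 0.25, 0.05])   # assumed log-growth volatilities
R = np.exp(mu + sigma * rng.standard_normal((20_000, 3)))  # sampled gross returns

# The GOP maximizes the expected log growth E[log(w . R)] over the simplex.
def neg_growth(w):
    return -np.mean(np.log(R @ w))

res = minimize(neg_growth, x0=np.full(3, 1 / 3),
               bounds=[(0.0, 1.0)] * 3,
               constraints=({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},))
w_gop = res.x
print(w_gop, -res.fun)  # weights and achieved expected log growth
```

In this sketch the expectation is estimated by Monte Carlo; the thesis instead derives an iterative scheme tailored to the log-normal case.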
  • Sallasmaa, Christa (2021)
    The topic of this thesis is participatory budgeting and its connection to the discussion between neoliberalism and participatory governance in the context of city development. Helsinki started its own model of participatory budgeting in 2018 and has pledged to continue the concept in the future. I examine whether Helsinki's participatory budgeting has the potential to support the ideologies of neoliberalism or participatory governance. In practice, I explore the views of the city government and of active members of Helsinki's neighborhood associations; neighborhood associations had a significant role in the original participatory budgeting of Porto Alegre. I used interviews and a qualitative survey to collect my data. Neoliberalism has contributed to inequality between regions and to the so-called crisis of democracy, and direct involvement of citizens is seen as a solution to these problems. Neoliberalism and participation have a paradoxical relationship: they have received similar criticism. In participatory governance, participation means deliberative decision-making based on the exchange of knowledge, but under neoliberalism participation can be a rhetorical tool to cover up actual decision-making, or a city-branding technique. Porto Alegre's original model of participatory budgeting is seen as part of participatory governance, but many of the international models seem more compatible with neoliberal ideology. The city government has not reserved enough resources for participatory budgeting; the execution was rushed and showed signs of rationalization. According to the interview and the qualitative survey, inequality between regions might be the downfall of Helsinki's participatory model. The active members of neighborhood associations see the benefits of participatory budgeting, but only from the perspective of certain regions. Currently, Helsinki's participatory budgeting works better as a branding technique than as a method of decision-making. It seems to be more compatible with neoliberalism than with participatory governance.
  • Toikka, Akseli (2019)
    Urban vegetation has traditionally been mapped through remote sensing methods such as laser scanning and aerial photography. However, it has been argued that a bird's-eye view of vegetation cannot fully represent the amount of green vegetation that citizens observe at street level. Recent studies have introduced human-perspective methods, such as street view images and the measurement of green view, alongside more traditional ways of mapping vegetation. The green view index states the percentage of green vegetation in the street view at a given location. The purpose of this study was to create a green view dataset of the city of Helsinki from street view imagery and to reveal the differences between the human perspective and the aerial perspective in vegetation mapping. Street view imagery of Helsinki was downloaded through the Google Street View application programming interface. The spatial extent of the data was limited by the availability of street view images from summer months. Several green view maps of Helsinki were created based on the green view values calculated from the street view images. In order to understand the differences between the human perspective and the aerial view, the green view values were compared with the regional land cover dataset of Helsinki through linear regression. Areas with large differences between the datasets were examined visually through the street view imagery. Helsinki's green view was also compared internationally with other cities for which similar data are available. It appeared that the green view of Helsinki is divided unequally across the city area. The lowest green view values were found in the downtown area, industrial areas and the business centers of the suburbs; the highest values were located in the housing suburbs. When compared with the land cover, it was found that the green view has a weak correlation with low vegetation and a relatively high correlation with taller vegetation such as trees. Differences between the datasets were mainly concentrated in areas where the vegetation was not visible from the street for various reasons. The main sources of error were the oldest street view images and flaws in image classification caused by other green objects and shadows. Even though Helsinki has many parks and other green spaces, the greenery visible from the streets is not always that high. The green view dataset created in this study helps to understand the spatial distribution of street greenery and brings the human perspective alongside more traditional ways of mapping city vegetation. When combined with previous city greenery datasets, the green view dataset can help to build a more holistic understanding of the city greenery in Helsinki.
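    A green view index of the kind described above can be illustrated with a toy pixel classifier. This is a minimal sketch, not the thesis's actual classification method; the simple "green channel dominates" rule and the toy image are assumptions made for the example:

```python
import numpy as np

def green_view_index(img):
    """Percentage of pixels classified as vegetation in an RGB image.

    Illustrative rule: a pixel counts as 'green' when its G channel
    strictly dominates both R and B.
    """
    r, g, b = img[..., 0].astype(int), img[..., 1].astype(int), img[..., 2].astype(int)
    green = (g > r) & (g > b)
    return 100.0 * green.mean()

# Toy image: left half vegetation-like green, right half grey pavement
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[:, :5] = (30, 160, 40)
img[:, 5:] = (120, 120, 120)
print(green_view_index(img))  # → 50.0
```

A real pipeline would classify actual panorama images and average the index over the viewing directions at each sampling point.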
  • Hanninen, Elsa (2020)
    Estimating the loss on an insurance contract is important for an insurance company's risk management. This thesis presents Hattendorff's theorem for evaluating the expected value and variance of the loss of an insurance contract, and applies its results to a life insurance contract modelled by a multi-state Markov process. By Hattendorff's theorem, for a contract priced according to the equivalence principle, the expected losses arising on disjoint time intervals are zero and the losses are uncorrelated, so the variance of the total loss can be computed as the sum of the variances of the losses on the disjoint intervals. In the applied part of the thesis, Markov processes are simulated in a suitable multi-state model to represent realizations of life insurance contracts. It is examined whether the mean of the yearly losses produced by the simulated paths is close to zero, and whether the variance of the loss over the whole contract period is close to the sum of the variances of the yearly losses. In addition, the theoretical counterparts given by Hattendorff's theorem are computed for the simulation setting and compared with the simulated values. An insurance contract roughly involves two kinds of payments: claim payments made by the insurer and premiums paid by the insured. The cash flow of the contract over a time interval is the value, discounted to time zero, of the difference between the claims and premiums occurring in that interval. The reserve is the expected value of the cash flow arising after the valuation time, discounted to the valuation time. The loss of the contract on a time interval is defined as the sum of the cash flow of that interval and the change in the value of the reserve. When one defines a stochastic process that at a given time equals the costs accumulated so far plus the present value of the future reserve, the loss can be expressed as the increment of this process. This process is a square-integrable martingale, so the results of Hattendorff's theorem follow from the properties of the increments of square-integrable martingales. Hattendorff's results were discovered as early as the 1860s, but the use of martingale theory is a modern approach to the problem. By expressing the costs of a contract modelled by a multi-state Markov process as a Lebesgue-Stieltjes integral, computable forms for the variance of the loss are obtained. For contracts modelled by a Markov process, a special case of Hattendorff's result can be derived in which the losses are allocated not only to different years but also to different states. In the applied section it is seen that the expected values of the losses arising in individual contract years are close to zero, and that the sum of the sample variances approaches the sample variance of the loss over the whole contract period, in agreement with the claims of Hattendorff's theorem. The simulated sample variances do not fully match their theoretical counterparts.
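    The simulation experiment described above can be sketched for the simplest case, a two-state (alive/dead) term insurance priced by the equivalence principle. The contract parameters and sample size below are invented for illustration; the thesis itself uses a richer multi-state model:

```python
import numpy as np

rng = np.random.default_rng(1)
n, q, i, b = 10, 0.02, 0.03, 1.0       # term, yearly death prob., interest, benefit
p, v = 1 - q, 1 / (1 + i)

# Equivalence-principle premium: PV(benefits) = PV(premiums)
ks = np.arange(n)
A = np.sum(v**(ks + 1) * p**ks * q)    # PV of the death benefit
ann = np.sum(v**ks * p**ks)            # PV of unit premiums paid in advance
P = b * A / ann

# Prospective reserves V_0..V_n by backward recursion
V = np.zeros(n + 1)
for k in range(n - 1, -1, -1):
    V[k] = v * (q * b + p * V[k + 1]) - P

# Simulate yearly losses, discounted to time 0
m = 100_000
losses = np.zeros((m, n))
alive = np.ones(m, dtype=bool)
for k in range(n):
    dies = alive & (rng.random(m) < q)
    survives = alive & ~dies
    losses[:, k] = v**k * (v * (b * dies + V[k + 1] * survives)
                           - (V[k] + P) * alive)
    alive = survives

total = losses.sum(axis=1)
print(losses.mean(axis=0))                     # each yearly mean loss ≈ 0
print(total.var(), losses.var(axis=0).sum())   # ≈ equal: losses are uncorrelated
```

The two printed variances agreeing is exactly the additivity that Hattendorff's theorem asserts for losses on disjoint intervals.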
  • Annala, Jaakko (2020)
    We study how higher-order gravity affects Higgs inflation in the Palatini formulation. We first review the metric and Palatini formulations in a comparative manner and discuss their differences. Next, cosmic inflation driven by a scalar field and inflationary observables are discussed. After this we review Higgs inflation and compute the inflationary observables in both the metric and Palatini formulations. We then consider adding higher-order curvature terms to the action. We derive the equations of motion for the most general action quadratic in the curvature that does not violate parity, in both the metric and Palatini formulations. Finally, we present a new result: we analyse Higgs inflation in the Palatini formulation with higher-order curvature terms. We consider a simplified scenario in which only terms constructed from the symmetric part of the Ricci tensor are added to the action. This implies that there are no new gravitational degrees of freedom, which makes the analysis easier. We find that the scalar perturbation spectrum is unchanged, but the tensor perturbation spectrum is suppressed by the higher-order curvature couplings.
  • Pankkonen, Joona (2020)
    The Standard Model is one of the most accurate theories we have. It has demonstrated its success through predictions and discoveries of new particles, such as the gauge bosons W and Z and the heaviest quarks, charm, bottom and top. After the discovery of the Higgs boson in 2012, the Standard Model became complete in the sense that all the elementary particles it contains had been observed. In this thesis I cover the particle content and interactions of the Standard Model and then explain the Higgs mechanism in detail. The main feature of the Higgs mechanism is spontaneous symmetry breaking, which is the key element for the mechanism to work. The Higgs mechanism gives rise to the masses of the particles, especially the gauge bosons. The Higgs boson was found at the Large Hadron Collider by the CMS and ATLAS experiments. In the experiments, protons were collided at high energies (8-13 TeV). This leads to production of the Higgs boson through different production channels such as gluon fusion (ggF), vector boson fusion (VBF) and Higgsstrahlung. Since the lifetime of the Higgs boson is very short, it cannot be measured directly. In the CMS experiment the Higgs boson was detected via the channel H → ZZ → 4l and via H → γγ. In this thesis I examine the correspondence of the Standard Model to LHC data by using signal strengths of the production and decay channels, parametrizing the interactions of the fermionic and bosonic production and decay channels. Data analysis carried out with the least squares method gave confidence-level contours that describe how well the predictions of the Standard Model correspond to the LHC data.
  • Berlea, Vlad Dumitru (2020)
    The nature of dark matter (DM) is one of the outstanding problems of modern physics. The existence of dark matter implies physics beyond the Standard Model (SM), as the SM doesn't contain any viable DM candidates. Dark matter manifests itself through various cosmological and astrophysical observations: the rotational speeds of galaxies, structure formation, measurements of the Cosmic Microwave Background (CMB) and gravitational lensing of galaxy clusters. An attractive explanation of the observed dark matter density is provided by the WIMP (Weakly Interacting Massive Particle) paradigm. In this thesis I explore this idea within the well-motivated Higgs portal framework. In particular, I explore three options for the dark matter composition: a scalar field, and U(1) and SU(2) hidden gauge fields. I find that the WIMP paradigm is still consistent with the data. Even though it finds itself under pressure from direct detection experiments, it is not yet in crisis. Simple and well-motivated WIMP models can fit the observed DM density without violating the collider and direct DM detection constraints.
  • Gasques Rocha Pinheiro, Beatriz (2020)
    Geometric isomers are of great importance due to the different properties of E and Z compounds. The interconversion between these forms allows a vast range of applications to be explored, from their use in the perfume and food industries to the development of photoactive drugs and advanced polymers. Included in this scenario are the E/Z isomers of pepper alkaloids, whose broad range of desirable pharmacological activities, such as anti-inflammatory, antioxidant and anti-cancer effects, makes them a focus of multidisciplinary research. Black pepper contains several of these alkaloids, among which piperine is the most abundant. Its properties have been studied for many years, with emphasis on its pungency and flavour, in addition to medicinal applications that date back to ancient Indian and Chinese medicine. Piperine and structurally related compounds undergo rapid double-bond isomerization in the presence of light, equilibrating to a mixture of four geometric isomers due to the two conjugated double bonds present in their structures. The biological activity of these isomers differs from that of the naturally abundant E/E molecules, emphasizing the importance of having reliable analytical assays for their separation, detection and quantification. The current project pursued the development of robust HPLC assays for isomer separation for piperine and some analogues. The effort included the extraction of piperine from black pepper and its use in the synthesis of highly pure piperylin and piperlonguminine standards. Piperine extraction kinetics was also studied to optimize the extraction procedure. The alkaloid standards were isomerized using sunlight, and HPLC separation methods on chiral stationary phases were then successfully established to resolve their E/Z isomers. Isocratic runs were also developed for piperine, piperylin and piperlonguminine, with the goal of adapting these methods to LC/MS applications in the future. These last separations could be accomplished within 25 minutes, with critical resolution values larger than 1.8.
  • Turtiainen, Harri (2020)
    A promising Cu-Ni-PGE-bearing sulphide ore deposit was discovered in 2009 by Anglo American, and since then the company has continued studies aiming towards utilisation of the deposit. The discovered deposit lies underneath a Natura 2000 protected mire complex, Viiankiaapa, in the Sodankylä municipality in Finnish Lapland. The research and exploration activities in the area are performed with mitigating and preventive actions in order to minimize the deterioration of the delicate ecosystem. A more detailed understanding of the hydrogeochemistry of the mire environment in its current state can assist in monitoring, mitigating and preventing potential environmental effects of future mining operations, as well as in planning the monitoring program. Hydrogeochemical studies, consisting of water and peat sampling at eight sampling points, were carried out along a 1.6 km long study line. Water samples were collected from the surface of the mire as well as within the peat layer and at the bottom of the peat layer, using a mini-piezometer. The analyses of the water samples covered major components, trace elements and δ18O & δ2H. Groundwater influence at the different sampling points, and in different sections of the peat, was investigated using the mentioned chemical and isotopic properties. Peat sampling focused on finding samples with different hydraulic properties in order to determine the influence of peat on the hydrology of the mire. The hydraulic conductivity of the peat samples was determined using a rigid-wall permeameter test setup. The chemical and physical methods were supplemented by a ground-penetrating radar survey completed with 30 and 100 MHz antennas. Studies of the peat showed that the hydraulic conductivity varies substantially even inside the rather small study area. The widely recognized correlation between hydraulic conductivity and depth was not observed statistically, but the sampling sites individually show a clear connection between depth and hydraulic conductivity. The influence of the hydraulic properties of peat on the flow of water in the mire was observed to be significant. Where the hydraulic conductivity of peat was very low, water flow may be prevented altogether; this was confirmed by the chemical analyses. With higher hydraulic conductivity, groundwater influence was seen more or less throughout the peat profile.
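    For a constant-head test, the permeameter determination mentioned above reduces to Darcy's law. A minimal sketch with invented measurement values (the thesis does not report these numbers):

```python
def hydraulic_conductivity(Q, L, A, dh):
    """Darcy's law for a constant-head rigid-wall permeameter:
    K = Q * L / (A * dh), with flow rate Q (m^3/s), sample length L (m),
    cross-sectional area A (m^2) and head difference dh (m)."""
    return Q * L / (A * dh)

# Illustrative values only
K = hydraulic_conductivity(Q=2.0e-8, L=0.10, A=8.0e-3, dh=0.25)
print(K)  # → 1e-06 m/s
```

Falling-head tests use a slightly different formula; the constant-head form is shown here for simplicity.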
  • Juvonen, Mari (2020)
    In recent decades, concern has grown over the increasing amounts of chemicals entering the environment. Sex hormones carried into water bodies, especially estrogens, have been found to harm the development of fish and other aquatic organisms and to disturb the reproduction and endocrine function of fish. The steroid hormones arriving at wastewater treatment plants originate from municipal wastewater, agriculture, the pharmaceutical industry and hospitals. The treatment process does not remove all steroid hormones, and some of the hormone residues pass into the environment, contaminating groundwater, surface water and soil. This thesis reviews the determination of steroids in wastewater by liquid chromatography and capillary electrophoresis in 2010-2020. Steroids in wastewater have mainly been studied with GC-MS (gas chromatography-mass spectrometry) and LC-MS or LC-MS/MS (liquid chromatography-mass spectrometry or liquid chromatography-tandem mass spectrometry) methods. Gas chromatography is suited only to volatile and thermally stable compounds and often requires extensive sample preparation. For this reason, liquid chromatography is currently the most common method for determining steroids. Relatively few studies have used capillary electrophoresis (CE), but it has proven a promising technique for steroid analysis. The advantage of CE is its high separation efficiency even for structurally similar compounds, such as steroids and their metabolites. CE techniques are divided into subtypes according to their separation principles. Micellar electrokinetic capillary chromatography (MEKC) is based on the use of a surfactant in the buffer solution. When the surfactant concentration exceeds the so-called critical micelle concentration, micelles form in the solution, and separation is based on the interaction between these micelles and the analytes. In partial-filling micellar electrokinetic chromatography (PF-MEKC), only a small part of the capillary is filled with the micelle solution. The MEKC technique is suitable for separating both neutral and charged compounds. Because steroids occur in wastewater at very low concentrations (ng/l), samples must be preconcentrated before analysis. Solid-phase extraction (SPE) is most often used for this. New solid-phase extraction techniques have also been introduced; these are often so-called microextraction techniques, which consume less solvent and sample.
  • Kärppä, Mai (2020)
    Arctic peatlands are globally extensive and long-lasting storages of carbon and are therefore important ecosystems controlling global carbon cycling. Changes in climate affect peatlands' ability to accumulate carbon through changes in hydrology and water table level, vegetation, soil temperature and permafrost thaw. As climate warming is projected mostly for northern and arctic regions, it may change the peatlands' capacity to sequester and release carbon as carbon dioxide and methane. In this Master's thesis I studied how past climate changes are reflected in carbon accumulation rates over the past millennia. Known climate anomalies, such as the Medieval Climate Anomaly, the Little Ice Age and the most recent rapid warming starting from 1980, and their impact on the average long-term apparent rate of carbon accumulation were studied from peat proxies. 15 peat cores were collected from northern subarctic Swedish Lapland and from North-East European Russia. The cores were collected from the active peat layer above the permafrost, which is known to be sensitive to climate warming. The cores were dated with radiocarbon (14C) and lead (210Pb) methods, and peat properties and accumulation patterns were calculated for one-centimeter-thick subsamples based on the chronologies. The Little Ice Age and the last rapid warming affected the carbon accumulation rate considerably, whereas for the Medieval Climate Anomaly the peat records did not show a very distinctive response. During the Little Ice Age the carbon accumulation rates were low (median 10.5 g m⁻² yr⁻¹), but after the Little Ice Age, and especially during the last warm decades after 1980, carbon accumulation rates have been high (median 48.5 g m⁻² yr⁻¹). The Medieval Climate Anomaly had only a minor positive effect on accumulation rates. On average, the long-term apparent rate of carbon accumulation during the past millennia was 43.3 g m⁻² yr⁻¹, which is distinctly higher than the previously reported rate of 22.9 g m⁻² yr⁻¹ for northern peatlands (p-value 0.0003). Based on the results it can be concluded that warm climate periods accelerated the carbon accumulation rate, whereas during cold periods accumulation decelerated. A warm climate prolongs the growing period and accelerates the decomposition of peat; a cold climate shortens the period of plant growth and thickens the permafrost layer in peatlands. However, peat layers formed after the Little Ice Age are incompletely decomposed, which partly inflates the apparent carbon accumulation rate. Nevertheless, permafrost thawing has been shown to increase accumulation rates as well. Studying past carbon accumulation rates helps to understand peatland and carbon cycling dynamics better. Even though accumulation rates reveal a lot about the carbon sequestration capability of peat, they do not indicate whether a peatland has been a carbon sink or a source.
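    The long-term apparent rate of carbon accumulation used above is simply the cumulative carbon mass above a dated horizon divided by the age of that horizon. A minimal sketch with invented numbers (not the thesis data):

```python
def lorca(total_carbon_g_m2, basal_age_yr):
    """Long-term apparent rate of carbon accumulation (g m^-2 yr^-1):
    cumulative carbon mass above a dated horizon divided by its age."""
    return total_carbon_g_m2 / basal_age_yr

# Illustrative: 43,300 g C per m^2 accumulated over 1,000 years
print(lorca(43_300, 1000))  # → 43.3 g m^-2 yr^-1
```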
  • Seshadri, Sangita (2020)
    Blurring is a common phenomenon in image formation, caused by factors such as motion between the camera and the object, atmospheric turbulence, or the camera failing to have the object in focus, all of which degrade the image formation process. Each pixel interacts with its neighbours, and the captured image is blurry as a result. This interaction with the neighbouring pixels is the 'spread', which is represented by the point spread function. Image deblurring has many applications, for example in astronomy and medical imaging, where extracting the exact image may not be possible due to various limiting factors, and what we get is a deformed image. In such cases it is necessary to use an apt deblurring algorithm, keeping necessary factors like performance and time in mind. This thesis analyzes the performance of learning-based and analytical methods in image deblurring. Inverse problems are discussed first, and how ill-posed inverse problems like image deblurring cannot be tackled by naive deconvolution. This is followed by the need for regularization, and how it is necessary to control the fluctuations resulting from extreme sensitivity to noise. The image reconstruction problem has the form of a convex variational problem, with prior knowledge acting as inequality constraints that create a feasible region for the optimal solution; interior point methods iterate within this feasible region. This thesis uses the iRestNet method, which takes a forward-backward iterative approach, as the machine learning algorithm, and a total variation approach implemented with the FlexBox tool, which uses a primal-dual approach, as the analytical method. The performance is measured using SSIM indices for a range of kernels, and the SSIM map is also analyzed to compare the deblurring efficiency.
  • Besel, Vitus (2020)
    We investigated the impact of various parameters on the new particle formation rates predicted for the sulfuric acid - ammonia system using cluster distribution dynamics simulations, in our case ACDC (Atmospheric Cluster Dynamics Code). The predicted particle formation rates increase significantly if the rotational symmetry numbers of the monomers (sulfuric acid and ammonia molecules, and bisulfate and ammonium ions) are considered in the simulation. On the other hand, inclusion of the rotational symmetry numbers of the clusters changes the results only slightly, and only in conditions where charged clusters dominate the particle formation rate: most of the clusters stable enough to participate in new particle formation display no symmetry, and therefore have a rotational symmetry number of one, and the few exceptions to this rule are positively charged. Further, we tested the influence of applying a quasi-harmonic correction to low-frequency vibrational modes. Generally, this decreases the predicted new particle formation rates and significantly alters the shape of the formation rate curve plotted against the sulfuric acid concentration. We found that the impact of the maximum size of the clusters explicitly included in the simulations depends on the simulated conditions; the errors due to the limited set of simulated clusters generally increase with temperature and decrease with vapor concentrations. The boundary conditions for clusters that are counted as formed particles (outgrowing clusters) have only a small influence on the results, provided that the definition is chemically reasonable and the set of simulated clusters is sufficiently large. We compared the predicted particle formation rates with experimental data measured at the CLOUD (Cosmics Leaving OUtdoor Droplets) chamber. A cluster distribution dynamics model shows improved agreement with the experiments when using our new input data and the proposed combination of symmetry and quasi-harmonic corrections, compared to an earlier study based on older quantum chemical data.
  • Lauha, Patrik (2021)
    Automatic bird sound recognition has been studied by computer scientists since the late 1990s. Various techniques have been exploited, but no general method that comes even close to matching the performance of a human expert has been developed yet. In this thesis, the subject is approached by reviewing alternatives to cross-correlation as a similarity measure between two signals in template-based bird sound recognition models. Template-specific binary classification models are fitted with different methods and their performance is compared. The methods considered are template averaging and processing before applying cross-correlation, the use of texture features as additional predictors, and feature extraction through transfer learning with convolutional neural networks. It is shown that the classification performance of template-specific models can be improved by template refinement and by utilizing neural networks' ability to automatically extract relevant features from bird sound spectrograms.
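    The baseline similarity measure discussed above, cross-correlation between a template and a signal, can be sketched in one dimension. This is a minimal illustration (a sine wave standing in for a song syllable; real systems correlate spectrogram patches):

```python
import numpy as np

def xcorr_peak(signal, template):
    """Peak normalized cross-correlation between a 1-D signal and a template.
    Returns a value in [-1, 1]; near 1 means a close match somewhere in the signal."""
    t = (template - template.mean()) / (template.std() * len(template))
    best = -1.0
    for i in range(len(signal) - len(template) + 1):
        w = signal[i:i + len(template)]
        s = w.std()
        if s == 0:
            continue
        best = max(best, float(np.dot((w - w.mean()) / s, t)))
    return best

rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 8 * np.pi, 100))   # stand-in for a syllable template
signal = np.concatenate([rng.normal(0, 0.3, 200), template, rng.normal(0, 0.3, 200)])
print(xcorr_peak(signal, template))  # close to 1.0: the template occurs in the signal
```

The thesis's template-specific classifiers use such a peak score (and its alternatives) as the predictor for a binary presence/absence decision.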
  • Sokka, Iris (2019)
    Cancer is a worldwide health problem; in 2018, 9.6 million people died of cancer, meaning that about 1 in 6 deaths was caused by it. The challenge in cancer drug therapy has been the development of drugs that are effective against cancer but not harmful to healthy cells. One of the solutions has been antibody-drug conjugates (ADCs), in which a cytotoxic drug is bound to an antibody. The antibody binds to a specific antigen present on the surface of the cancer cell, thus working as a vessel that carries the drug specifically to the cancer cells. Monomethyl auristatin E (MMAE) and monomethyl auristatin F (MMAF) are mitosis-preventing cancer drugs. The auristatins are pentapeptides that were developed from dolastatin 10. MMAE consists of monomethyl valine (MeVal), valine (Val), dolaisoleuine (Dil), dolaproine (Dap) and norephedrine (PPA). MMAF has an otherwise similar structure, but norephedrine is replaced by phenylalanine (Phe). They prevent cell division and cancer cell proliferation by binding to microtubules and are thus able to kill any kind of cell. By attaching an auristatin to an antibody that targets cancer cells, they can effectively be used in the treatment of cancer. MMAE and MMAF exist as two conformers in solution, namely the cis- and trans-conformers; the trans-conformer resembles the biologically active conformer. It was recently noted that in solution 50-60 % of the MMAE and MMAF molecules exist in the biologically inactive cis-conformer. The molecule changes from one conformer to the other through rotation of an amide bond, but this takes several hours at body temperature. As the amount of the cis-conformer is significant, the efficacy of the drug is decreased and the possibility of side effects is increased. It is possible that the molecule leaves the cancer cell in its inactive form, migrates to healthy cells and tissue, and transforms into the active form there, damaging the healthy cell. The goal of this study was to modify the structure of the auristatins so that the cis/trans-equilibrium shifts to favor the biologically active trans-conformer. The modifications were done virtually, and the relative energies were computed using high-level quantum chemical methods at the density functional theory (DFT), 2nd-order perturbation theory (MP2) and coupled cluster levels. Intramolecular interactions were analyzed computationally, employing symmetry-adapted perturbation theory and non-covalent interactions (NCI) analysis. The results suggest that simple halogenation of the benzene ring para-position can significantly shift the cis/trans-equilibrium to favor the trans-conformer. This is due to changes in intramolecular interactions that favor the trans-conformer after halogenation. For example, the NCI analysis shows that the halogen atom invokes stabilizing intramolecular interactions with the Dil amino acid; there is no such interaction between the para-position hydrogen and Dil in the original molecules. We also performed docking studies showing that the halogenated molecules can bind to microtubules, confirming that the modified structures have the potential to be developed into new, more efficient and safe cancer drugs. The most promising drug candidates are Cl-MMAF, F-MMAF and F-MMAE, for which 94, 90 and 79 % of the molecules, respectively, are predicted to exist in the biologically active trans-conformer.
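    Conformer percentages of the kind quoted above follow from two-state Boltzmann statistics. A minimal sketch, where the free-energy difference passed in is an invented illustrative value, not one of the thesis's computed energies:

```python
import math

def trans_fraction(dG_kJmol, T=310.0):
    """Two-state Boltzmann population: fraction of molecules in the
    trans-conformer, given dG = G(cis) - G(trans) in kJ/mol at temperature T (K)."""
    R = 8.314462618e-3  # gas constant, kJ/(mol K)
    return 1.0 / (1.0 + math.exp(-dG_kJmol / (R * T)))

print(trans_fraction(0.0))  # → 0.5: equal energies give a 50/50 mixture
print(trans_fraction(7.0))  # an illustrative ~7 kJ/mol gap strongly favors trans
```

This is the link between the computed relative conformer energies and the reported trans-populations.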
  • Kemppainen, Esa (2020)
    NP-hard optimization problems can be found in various real-world settings such as scheduling, planning and data analysis. Coming up with algorithms that can efficiently solve these problems can save various resources. Instead of developing problem-domain-specific algorithms, we can encode a problem instance as an instance of maximum satisfiability (MaxSAT), an optimization extension of Boolean satisfiability (SAT), and then solve the resulting instances using MaxSAT-specific algorithms. This way we can solve instances in various different problem domains by focusing on developing algorithms for MaxSAT. Computing an optimal solution and proving its optimality can be time-consuming in real-world settings, where finding an optimal solution is often not feasible. Instead, we are only interested in finding a good-quality solution fast. Incomplete solvers trade guaranteed optimality for better scalability. In this thesis, we study an incomplete solution approach for solving MaxSAT based on linear programming relaxation and rounding. Linear programming (LP) relaxation and rounding has been used to obtain approximation algorithms for various NP-hard optimization problems, so we are interested in investigating the effectiveness of this approach on MaxSAT. We describe multiple rounding heuristics that are empirically evaluated on random, crafted, and industrial MaxSAT instances from yearly MaxSAT Evaluations. We compare the rounding approaches against each other and against the state-of-the-art incomplete solvers SATLike and Loandra. The LP relaxation based rounding approaches are in general not competitive against either SATLike or Loandra. However, for some problem domains our approach manages to be competitive with SATLike and Loandra.
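The relaxation-and-rounding idea can be sketched concretely. The following is a minimal illustration (not the implementation developed in the thesis) that solves the standard LP relaxation of weighted MaxSAT with `scipy.optimize.linprog` and applies a simple 0.5-threshold rounding heuristic:

```python
import numpy as np
from scipy.optimize import linprog

def maxsat_lp_round(n_vars, clauses, threshold=0.5):
    """LP relaxation + threshold rounding for weighted MaxSAT (a sketch).

    clauses: list of (weight, literals), a literal being +i or -i (1-indexed).
    The LP uses x_1..x_n in [0,1] plus one variable z_j in [0,1] per clause,
    maximizing sum_j w_j * z_j subject to
      z_j <= sum(x_i over positive lits) + sum(1 - x_i over negative lits).
    """
    m = len(clauses)
    # Decision vector: [x_1..x_n, z_1..z_m]; linprog minimizes, so negate weights.
    c = np.zeros(n_vars + m)
    for j, (w, _) in enumerate(clauses):
        c[n_vars + j] = -w
    A_ub, b_ub = [], []
    for j, (_, lits) in enumerate(clauses):
        # Rewritten as: z_j - sum(pos x_i) + sum(neg x_i) <= (# negative lits)
        row = np.zeros(n_vars + m)
        row[n_vars + j] = 1.0
        neg = 0
        for lit in lits:
            i = abs(lit) - 1
            if lit > 0:
                row[i] -= 1.0
            else:
                row[i] += 1.0
                neg += 1
        A_ub.append(row)
        b_ub.append(neg)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0.0, 1.0)] * (n_vars + m), method="highs")
    # Round the fractional assignment with a fixed threshold.
    assignment = [res.x[i] >= threshold for i in range(n_vars)]
    sat_weight = sum(w for w, lits in clauses
                     if any((lit > 0) == assignment[abs(lit) - 1] for lit in lits))
    return assignment, sat_weight
```

The thesis evaluates several rounding heuristics; the fixed 0.5 threshold above is only the simplest representative of that family.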
  • Bouri, Ioanna (2019)
    In model selection, it is necessary to select a model from a set of candidate models based on some observed data. The model should fit the data well, but without being overly complex, since excess complexity would prevent the model from generalizing its predictions well to unseen data. Information criteria are widely used model selection methods: they estimate a score for each candidate model and use that score to make a selection. A common way of estimating such a score rewards the candidate model for its goodness of fit on the observed data and penalizes it for model complexity. Many popular information criteria, such as Akaike's Information Criterion (AIC) and the Bayesian Information Criterion (BIC), penalize model complexity by the feature dimension. However, in a non-standard setting with inherent dependencies, these criteria are prone to over-penalizing the complexity of the model. Motivated by this tendency to over-penalize, we evaluate AIC and BIC in a multi-target setting with correlated features. We compare AIC and BIC with the Fisher Information Criterion (FIC), a criterion that takes correlations amongst features into consideration and does not penalize model complexity solely by the feature dimension of the candidate model. We evaluate the feature selection and predictive performances of the three information criteria in a linear regression setting with correlated features. We evaluate the precision, recall, and F1 score of the set of features each criterion selects, compared to the feature set of the generative model. Under this setting's assumptions, we find that FIC yields the best results, compared to AIC and BIC, in both the feature selection and predictive performance evaluations.
Finally, using FIC's properties for feature selection, we derive a formulation that allows us to approximate the effective feature dimension of models with correlated features in linear regression settings.
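As a concrete reminder of how the AIC and BIC penalties mentioned above work, the following sketch (illustrative only: the data, coefficients, and candidate feature sets are made up, and FIC is not shown) computes AIC = 2k - 2 ln L and BIC = k ln(n) - 2 ln L for ordinary least-squares candidates with Gaussian errors:

```python
import numpy as np

rng = np.random.default_rng(0)

def aic_bic(y, X):
    """AIC and BIC of an OLS fit with Gaussian errors (MLE noise variance)."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    k = p + 1  # regression coefficients plus the noise variance
    return 2 * k - 2 * loglik, k * np.log(n) - 2 * loglik

# Synthetic data: the generative model uses only features 0 and 1.
n = 200
X = rng.standard_normal((n, 4))
y = 2 * X[:, 0] - 3 * X[:, 1] + 0.5 * rng.standard_normal(n)

candidates = [[0], [0, 1], [0, 1, 2], [0, 1, 2, 3]]
scores = {tuple(s): aic_bic(y, X[:, s]) for s in candidates}
best_bic = min(scores, key=lambda s: scores[s][1])
```

With independent features as above, the feature-dimension penalty behaves well; the thesis's point is that this breaks down once features are strongly correlated, which is where FIC's correlation-aware penalty comes in.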
  • Lähteenmäki, Henry (2019)
    The purpose of this work is to help teachers and education professionals approach Wilber's Integral Theory and integral education. Integral education, which uses Integral Theory as its framework, is one of the most viable candidates for a future teaching method that is integral and as holistic as possible. The theory section reviews the background and development of Integral Theory. The most important components of Integral Theory are then presented: the AQAL matrix, the Wilber-Combs lattice, and Integral Methodological Pluralism. Next, integral education is discussed: the need for it, its characteristics, and how the key components of Integral Theory should be taken into account in integral education. Finally, the role of the teacher and the challenges of integral education are considered. The empirical part of the work consists of a survey conducted in May 2018 at the Tampico campus of Tecnologico de Monterrey university in Mexico. The purpose of the survey was to examine students' subjective attitudes, opinions, and experiences of integral education. The responses were analyzed statistically with SPSS software. The students rated the benefit of integral education for learning and studying as high. Based on their ratings, an integral approach to life in general was also assessed as beneficial, and the transformative potential of integral education was rated high. In addition, the students considered the characteristics and goals of integral education important for teaching. No statistically significant differences in responses were found within the comparison groups (gender, age, number of semesters at university, high school GPA, and university GPA). Thus, the students' assessments of integral education were positive regardless of background factors.
  • Rinta-Homi, Mikko (2020)
    Heating, ventilation, and air conditioning (HVAC) systems consume massive amounts of energy. Fortunately, significant energy savings can be achieved by carefully controlling these systems. This requires detecting the presence or the number of people inside the building. Countless different sensors can be used for this purpose, the most common being air quality sensors, passive infrared sensors, wireless devices, and cameras. This thesis provides a comprehensive review and comparison of these sensors. The use of low-resolution infrared cameras for counting people is then studied further, examining how different camera features influence counting accuracy: resolution, frame rate, and viewing angle. Two systems were designed: a versatile counting algorithm, and a testing system that modifies these camera features and tests the performance of the counting algorithm. The results show that infrared cameras with a resolution as low as 4x2 are as accurate as higher-resolution cameras, and that frame rates above 5 frames per second do not bring any significant advantage in accuracy. A resolution of 2x2 is also sufficient for counting but requires higher frame rates. Viewing angles need to be carefully adjusted for the best accuracy. In conclusion, this study shows that even the most primitive infrared cameras can be used for accurate counting. This puts infrared cameras in a new light, since primitive cameras can be cheaper to manufacture; infrared cameras used in occupancy counting therefore become significantly more feasible and have potential for widespread adoption.
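To illustrate the kind of processing such a counting algorithm performs on a single low-resolution thermal frame (this is a generic sketch, not the algorithm developed in the thesis), warm regions can be counted by thresholding the pixel temperatures and grouping them with a flood fill:

```python
def count_warm_blobs(frame, threshold):
    """Count 4-connected warm regions in a low-resolution thermal frame.

    frame: 2D list of pixel temperatures; threshold: minimum temperature
    for a pixel to be considered part of a person. Each connected warm
    region is counted once, via an iterative flood fill.
    """
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold and not seen[r][c]:
                blobs += 1
                stack = [(r, c)]
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return blobs
```

At a 4x2 resolution like the one studied in the thesis, a frame has only eight pixels, so even this naive per-frame pass is cheap; a real counter would additionally track blobs across frames to detect entries and exits.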
  • Kauppala, Juuso (2021)
    The rapidly increasing global energy demand has led to the necessity of finding sustainable alternatives for energy production. Fusion power is seen as a promising candidate for efficient and environmentally friendly energy production. One of the main challenges in the development of fusion power plants is finding suitable materials for the plasma-facing components in the fusion reactor. The plasma-facing components must endure extreme environments with high heat fluxes and exposure to highly energetic ions and neutral particles. So far the most promising materials for the plasma-facing components are tungsten (W) and tungsten-based alloys. Another promising class of materials for the plasma-facing components is high-entropy alloys, many of which have been shown to exhibit high radiation resistance and other desirable properties for industrial and high-energy applications. In materials research, both experimental and computational methods can be used to study materials' properties and characteristics. Computational methods can be either quantum mechanical calculations, which produce accurate results but are computationally extremely heavy, or more efficient atomistic simulations such as classical molecular dynamics simulations. In molecular dynamics simulations, interatomic potentials are used to describe the interactions between particles; they are often analytical functions fitted to the properties of the material. Instead of fixed functional forms, interatomic potentials based on machine learning methods have also been developed. One such framework is the Gaussian approximation potential (GAP), which uses Gaussian process regression to estimate the energies of the simulation system. In this thesis, the current state of fusion reactor development and the research of high-entropy alloys is presented, and an overview of interatomic potentials is given.
Gaussian approximation potentials for WMoTa concentrated alloys are developed using different numbers of sparse training points. A detailed description of the training database is given and the potentials are validated. The developed potentials are shown to give physically reasonable results for certain bulk and surface properties and could be used in atomistic simulations.
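The regression step at the core of a Gaussian approximation potential can be illustrated with a minimal Gaussian process sketch (the descriptors and energies below are hypothetical; a real GAP uses structural descriptors such as SOAP and a sparse set of representative points rather than the full training set):

```python
import numpy as np

def gp_predict(X_train, y_train, X_test, length_scale=1.0, noise=1e-6):
    """Minimal Gaussian process regression with a squared-exponential kernel.

    X_train: (N, d) training descriptors; y_train: (N,) energies;
    X_test: (M, d) descriptors to predict. Returns the posterior mean.
    """
    def kernel(A, B):
        # Pairwise squared Euclidean distances, then the RBF kernel.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length_scale**2)

    K = kernel(X_train, X_train) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)  # weights on the training points
    return kernel(X_test, X_train) @ alpha
```

The number of training (or, in GAP, sparse) points directly controls both the cost of the `K` solve and the accuracy of the fit, which is why the thesis compares potentials trained with different numbers of sparse points.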