
Browsing by Title


  • Hyvönen, Jere (2021)
    High-intensity, high-amplitude focused ultrasound has been used to induce cavitation for decades. Well-known applications are medical (lithotripsy and histotripsy) and industrial (particle cleaning, erosion, sonochemistry). These applications often use low frequencies (0.1-5 MHz), which limits the spatial precision of the actuation, and the chaotic nature of inertial cavitation is rarely monitored or compensated for, constituting a source of uncertainty. We demonstrate the use of high-frequency (12 MHz), high-intensity (I_SPTA = 90 W/cm²) focused-ultrasound-induced cavitation to locally remove solid material (pits with a diameter of 20 µm to 200 µm) for non-contact sampling. We demonstrate breaking cohesion (aluminium) and adhesion (thin film on a substrate, i.e. marker ink on microscope glass). The eroded surfaces were analyzed with a scanning acoustic microscope (SAM). We present the assembly and characterization of a focused ultrasound transducer and quantify the effect of different sonication parameters (amplitude, cycle count, burst count, defocus) on the size and shape of the resulting erosion pits. The quantitative precision of this method is achieved by systematic calibration measurements, linking the resulting erosion to acoustic parameters to ensure repeatability (sufficient probability of cavitation), and by inertial cavitation monitoring of the focal echoes. We discuss the usability of this method for localized non-contact sampling.
  • Li, Yonghao (2013)
    Nowadays, home gateways such as wireless access points (APs) and cable or DSL modems are widely deployed for residential and Small Office/Home Office (SOHO) customers to access Internet services. The home gateways typically act as middleboxes performing various higher-layer functions, such as network address translation (NAT), traffic filtering or advanced application-layer operations. The exact behavior of these functions is not standardized, and they are often undocumented by the home gateway vendors. However, such middleboxes are known to have undesired interactions with normal protocol operation; they substantially hinder and complicate new protocol development and prevent new protocol extensions from working. For this reason, it is important to learn the characteristics of deployed middleboxes so that network engineers can design protocols that are deployable in realistic environments that typically include middleboxes. The main purpose of this thesis is to explore the characteristics and unknown behavior of various widely deployed home gateways. To achieve this goal, numerous home gateways from different vendors were installed in our home gateway test-bed, and a set of purpose-built software tools was developed to reveal the realistic behavior of these devices under various circumstances and conditions. Also, to simplify the experimental procedures, test-system automation is used to intelligently and systematically organize multiple measurements on all the devices in our test-bed. Our experiments focus on recent protocols such as the Lightweight User Datagram Protocol (UDP-Lite), the Datagram Congestion Control Protocol (DCCP), the Stream Control Transmission Protocol (SCTP) and the User Datagram Protocol (UDP). Moreover, experiments on the maximum number of UDP bindings, UDP throughput performance and UDP broadcast are included to comprehensively learn how home gateways handle UDP traffic in practice.
The experimental results indicate that NAT functionality indeed prevents new protocols such as SCTP, DCCP and UDP-Lite from operating over current home gateway devices. In addition, home gateways significantly influence UDP throughput performance for residential users. Moreover, the results indicate that widely deployed home gateway devices exhibit a variety of NAT mapping, filtering and hairpinning behaviors. Hence, it is crucial to understand and benchmark NAT behavior, and the existence of middleboxes should not be ignored by protocol designers.
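The NAT mapping behavior the thesis benchmarks can be illustrated with a small sketch. This is not the thesis's test-bed software: it only isolates the classification logic, using RFC 4787 terminology, over hypothetical observations of the external (IP, port) mapping that probe servers at different remote endpoints report for one internal socket.

```python
# Classify observed NAT mappings by RFC 4787 mapping behaviour.
# observations: list of ((remote_ip, remote_port), (ext_ip, ext_port)) pairs
# for a single internal endpoint; addresses below are illustrative.

def classify_nat_mapping(observations):
    by_dest = dict(observations)
    if len(set(by_dest.values())) == 1:
        # Same external mapping regardless of destination
        return "endpoint-independent"
    per_ip = {}
    for (ip, _port), ext in by_dest.items():
        per_ip.setdefault(ip, set()).add(ext)
    if all(len(exts) == 1 for exts in per_ip.values()):
        # Mapping changes only when the remote IP changes
        return "address-dependent"
    # Mapping changes with the remote port as well
    return "address-and-port-dependent"
```

In the actual experiments the mappings are of course obtained by sending packets through the device under test; separating the probing from the classification keeps each measurement run easy to interpret.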
  • Gunnlaugsdóttir, Eyrún Gyða (2022)
    Biological soil crust, or biocrust, is a significant contributor to biogeochemical cycles through nitrogen and carbon cycling. Further, it stabilizes soil, facilitates water infiltration, and mitigates soil erosion. The global biocrust cover is believed to decrease by about 25-40% in the next 60 years due to climate change and intensification of land use. Research on biocrust in arctic and subarctic regions is limited; much of the knowledge comes from lower latitudes in arid and semiarid ecosystems. Cold-adapted biocrust might respond differently to increasing temperatures than warm-adapted biocrust. Therefore, it is fundamental to research biocrust in arctic and subarctic regions, given how fast the climate is changing in the Northern Hemisphere. Temporal variations of soil respiration in subarctic biocrust have not been studied systematically before. This research project focuses on the effects of warming on soil respiration in biocrust, on diurnal and seasonal scales. It also focuses on changes in the species composition of vascular plants in a warming experiment where warming was induced with open-top chambers (OTCs). Soil respiration, temperature, soil water content, and changes in plant species composition were measured during three field trips, each lasting four days, during the growing season of 2021. The results show that soil respiration was lower in September than in June and July. The highest values of soil respiration were observed during mid-day and the lowest during evenings and nights. The temperatures of OTC plots were, on average, 1.16 °C higher than those of control plots, and OTC plots had significantly lower soil water content than control plots. During this research, soil respiration increased with higher temperature but did not differ between control and OTC plots at any time of day or in any month measured.
Soil water content did not affect soil respiration significantly, while temperature did. These findings might be explained by the lower soil water content within the warmer plots: since warmth and moisture have both been shown to increase soil respiration, lower soil water content might counteract the increase in soil respiration due to warming. Some vascular plant species were more likely to be found within or outside the warming plots. Dwarf willow, Salix herbacea, decreased in cover within OTC plots. Previous research has shown that warming significantly reduces pollen shed and the time of pollen shedding for S. herbacea, which might decrease its abundance within OTC plots. Alpine bistort, Bistorta vivipara, increased in cover within OTC plots compared to control plots. Warming experiments on B. vivipara have shown positive effects on reproductive parameters, which might increase its abundance within warmed OTC plots. Sheep also prefer grazing on B. vivipara; it might therefore have less cover in control plots, given that OTCs exclude grazing and that many sheep roam the studied site during the growing season. Vascular plant cover was greater within control plots than within warmed plots. Previous results at the same site after one year of warming, from summer 2019, showed more vascular plant cover within the OTC plots than within control plots. The results of this research might indicate that vascular plants are gradually affected by the warming and are transitioning towards a new equilibrium. The results of this research provide grounds for further studies on subarctic ecosystems dominated by biocrust. Many biotic and abiotic factors affect carbon cycles. For future modelling of the predicted effects of climate change, better knowledge of how subarctic ecosystems respond to warming is essential for understanding functions and feedbacks in a global context.
  • Husu, Tuomas (2020)
    System administration is a traditional and demanding profession in information technology that has gained little attention from human-computer interaction (HCI) research. System administrators operate in a highly complex environment to keep business applications running and data available and safe. In order to understand the essence of system administrators' skill, this thesis reports individual differences in 20 professional system administrators' task performance, task solutions, verbal reports, and learning histories. A set of representative tasks was designed to measure individual differences, and structured interviews were used to collect retrospective information about system administrators' skill acquisition and level of deliberate practice. Based on the measured performance, the participants were divided into three performance groups. A group of five system administrators stood out from the 20 participants. They completed more tasks successfully, they were faster, they predicted their success more accurately, and they expressed more confidence during performance and anticipation. Although they had extensive professional experience, the study found no relationship between duration of experience and level of expertise. The results are aligned with expert-performance research from other domains: the highest levels of performance in system administration are attained as a result of systematic practice. This involves an investment of effort and makes the activity less enjoyable than competing activities. When studying the learning histories, the quantity and quality of programming experience and other high-effort computer-related problem-solving activities were found to be the main differentiating factors between the 'expert' and less-accomplished participants.
  • Lahtela, Tuomo (2019)
    This thesis aims to give a short, narrative account of chaos theory. The presentation examines and explains concepts central to chaos theory, such as determinism and sensitivity to initial conditions. In essence, the thesis tells the story of Edward Norton Lorenz, often regarded as the father of chaos theory, and of the discovery of the world's first chaotic system, which gave rise to the concept of sensitivity to initial conditions. The aim of the thesis is to show even a reader unfamiliar with the subject what chaos theory is about and why it matters. The chapter following the introduction is divided into three parts, which discuss chaos theory's place in the history of science, the concept of determinism, and the chain of events that led Edward Norton Lorenz to his scientific discovery. The third chapter explains, through easily approachable examples, the concept of sensitivity to initial conditions, better known as the butterfly effect. The fourth chapter introduces the concept of a dynamical system, explaining its central role in representing and mathematically formulating events in the world; the chapter also treats the concept of determinism more mathematically. Chapter 5 introduces the reader to the mathematical methods needed to understand the thesis's key chapters 6, 7 and 8, such as Taylor series in several dimensions, the Jacobian, and linearization. Chapter 6 presents the main topic of the thesis, the Lorenz system, defining it mathematically and describing its properties as a model of a simple weather system. The chapter also covers understanding the Lorenz system through the concept of a vector field and the geometric representation of the system's solution path, and investigates whether the Lorenz system is sensitive to initial conditions everywhere in its state space. At the end of the chapter it is shown, very illustratively, how predicting the behavior of the Lorenz system is practically impossible. In Chapter 7 the Lorenz system is shown to be sensitive to initial conditions by following its time evolution: by tracking the time evolution of a vector measuring the distance between two trajectories with nearly identical initial values, it is shown that the trajectories diverge from each other very rapidly. The final chapter gives a short general analysis of the time evolution of systems sensitive to initial conditions. It also introduces the concepts of attractor and strange attractor, which are central to chaos theory. In keeping with the title of the thesis, the chapter closes with a concise explanation of chaos, listing three main factors that mathematically characterize the concept of chaos.
  • Tähkä, Sari (2013)
    This Master's thesis deals with the use of block copolymers in capillary electromigration techniques (literature part) and in both materials chemistry and capillary electrophoresis (experimental part). Amphiphilic block copolymers are an interesting research topic due to their specific molecular structure, which consists of at least two parts with different chemical natures. The great potential of block copolymers arises from the tunability of their size, shape and composition. In recent years, numerous copolymer architectures have been developed, and the demand for new materials for biomolecule separations remains high. The literature part introduces block copolymers as rarely used coating materials in capillary electromigration techniques. The two main electromigration techniques where block copolymers have been tested are capillary electrophoresis and capillary gel electrophoresis. Block copolymers have been attached to the capillary inner surface both permanently and dynamically. In capillary gel electrophoresis the micellization ability of block copolymers has been well known for many decades, and specific studies of copolymer phases have been published. In the experimental part of this M.Sc. thesis, the double-hydrophilic poly(N-methyl-2-vinylpyridinium iodide-block-ethylene oxide) diblock copolymer was used in two very different applications to emphasize the potential of block copolymers in various fields. In both studies, the hydrophilicity of the ethylene oxide block and the polycationic nature of the vinylpyridinium block were utilized. First, poly(N-methyl-2-vinylpyridinium iodide-block-ethylene oxide) was used to mediate the self-assembly of ferritin protein cages. The aim of this research was to explore the complexation of double-hydrophilic diblock copolymers with protein cages and to study the molecular morphology of the formed nanoparticle/copolymer assemblies.
The complexation process was studied in an aqueous solvent medium, and the formation of complexes was investigated with dynamic light scattering. Transmission electron microscopy and small-angle X-ray scattering were used to characterize the size and shape of the particles. In the second approach, the double-hydrophilic block copolymer was used as a capillary coating material in two different capillary electromigration techniques. The possibility of altering the electro-osmotic flow and gaining a new tool for biomolecule studies was explored. Our results indicated that poly(N-methyl-2-vinylpyridinium iodide-block-ethylene oxide) binds efficiently to oppositely charged objects and surfaces via electrostatic interactions, and the polyethylene oxide block gives good stability in aqueous medium. Nanoparticle co-assembly studies showed that the poly(N-methyl-2-vinylpyridinium iodide-block-ethylene oxide) complexes were approximately 200-400 nm in diameter. In the capillary coating studies, the polymer suppressed electro-osmotic flow efficiently and showed good run-to-run stability, with RSD values from 1.4 to 7.9 %. The coating was observed to be very stable over the pH range 4.5 to 8.5, with ultra-low mobilities. The results demonstrate the potential of double-hydrophilic block copolymers in various fields in the future.
  • Terkki, Eeva (2016)
    Free mobile applications (apps) available on app marketplaces are largely monetized through mobile advertising. The number of clicks received on the advertisements (ads), and thus the revenue gained from them, can be increased by showing targeted ads to users. Mobile advertising networks collect a variety of privacy-sensitive information about users and use it to build advertising profiles. To target ads at individual users based on their interests, these advertising profiles are typically linked with the users' unique device identifiers, such as the advertising ID used in Android. Advertising profiles may contain a large amount of privacy-sensitive information about users, which can attract adversaries to attempt to gain access to this information. Mobile devices are known to leak privacy-sensitive information such as device identifiers in clear text. This poses a potential privacy risk, since an adversary might exploit the leaked identifiers to learn privacy-sensitive details about a victim by sampling personalized ads targeted at the victim. This thesis explores the behavior of mobile ad networks regarding data collection and ad targeting, as well as the possibility of an attack where leaked device identifiers are exploited to request ads targeted at a victim. We investigated these problems in the context of four popular Android ad libraries that support ad targeting, using a custom app and simulated user profiles designed for this purpose. Our findings indicate that it is possible to use sniffed identifiers to impersonate another user when requesting ads, and to some degree this can result in receiving ads specific to the victim's profile. For some ad networks, the lack of ad targeting makes it infeasible to conduct an attack to request ads targeted at the victim.
  • Gibson, Natalie (2023)
    The search for a profound connection between gravity and quantum mechanics has been a longstanding goal in theoretical physics. One such connection is known as the holographic principle, which suggests that the dynamics within a given region of spacetime can be fully described on its boundary surface. This concept led to the realization that string theory provides a lower-dimensional description that encapsulates essential aspects of spacetime. While the "AdS/CFT correspondence" exemplifies the success of this holographic theory, it was discovered soon after that the Universe has a positive cosmological constant, Λ. This immediately sparked interest in a potential correspondence centered around de Sitter (dS) space, which is also characterized by a positive cosmological constant. This thesis comprehensively explores the de Sitter/Conformal Field Theory (dS/CFT) correspondence from various perspectives, along with the unique challenges posed by the distinct nature of dS space. The original dS/CFT duality proposes that a two-dimensional Conformal Field Theory resides on the boundary of three-dimensional asymptotic dS space. However, the definition and interpretation of physical observables within the dS/CFT framework remain open questions. Therefore, the discussions in this thesis not only cover the original dS/CFT conjecture, but also extend into more recent advancements in the field. These advancements include a higher-spin dS/CFT duality, the relationship between string theory and dS space, and the intriguing proposal of an "elliptical" dS space. While the dS/CFT correspondence is still far from being well-defined, there have been extensive efforts devoted to shedding light on its intricate framework and exploring its potential applications. 
As the Universe may be evolving towards an approximately de Sitter phase, understanding the dS/CFT correspondence offers a unique opportunity for gaining fresh insights into the link between gravity and quantum field theory.
  • Sorkhei, Amin (2016)
    With the fast-growing number of scientific papers produced every year, browsing through scientific literature can be difficult: formulating a precise query is often not possible for a novice in a given research field, and different terms are often used to describe the same concept. To tackle some of these issues, we build a system based on topic models for browsing the arXiv repository. By visualizing the relationships between keyphrases, documents and authors, the system allows the user to explore the document search space better than traditional systems based solely on query search. In this paper, we describe the design principles and the functionality supported by this system, as well as report on a short user study.
  • Longi, Krista (2016)
    Researchers have long tried to identify factors that could explain why programming is easier for some than for others, or that can be used to predict programming performance. The motivation behind most studies has been to identify students at risk of failing and to improve passing rates on introductory courses, as these have a direct impact on retention rates. Various potential factors have been identified, including factors related to students' background, programming behavior, or psychological and cognitive characteristics. However, the results have been inconsistent. This thesis replicates some of these previous studies in a new context, performing pairwise analyses of various factors and performance. We have data collected from three cohorts of an introductory Java programming course that contains a large number of exercises and where personal assistance is available. In addition, this thesis contributes to the topic by modeling the dependencies between several of these factors. This is done by learning a Bayesian network from the data. We then evaluate these networks by trying to predict whether students will pass or fail the course. The focus is on factors related to students' background and psychological and cognitive characteristics. No clear predictors were identified in this study. We were able to find weak correlations between some of the factors and programming performance. However, in general, the correlations we found were smaller than in previous studies or nonexistent. In addition, finding a single optimal network that describes the domain is not straightforward, and the classification rates obtained were poor. Thus, the results suggest that the factors related to students' background and psychological and cognitive characteristics included in this study are not good predictors of programming performance in our context.
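The prediction task above can be sketched with a toy pass/fail classifier over categorical background factors. This is a deliberate simplification, not the thesis's method: naive Bayes is a Bayesian network with a fixed star-shaped structure, whereas the thesis learns the network structure from data. The feature names are hypothetical.

```python
# Toy naive Bayes over categorical features; feature names are made up.
import math
from collections import Counter, defaultdict

def train_nb(rows, labels):
    """rows: list of dicts mapping feature -> categorical value."""
    label_counts = Counter(labels)
    cond = defaultdict(Counter)  # (feature, label) -> counts of observed values
    for row, y in zip(rows, labels):
        for f, v in row.items():
            cond[(f, y)][v] += 1
    return label_counts, cond

def predict_nb(model, row):
    label_counts, cond = model
    total = sum(label_counts.values())
    best, best_score = None, float("-inf")
    for y, ny in label_counts.items():
        score = math.log(ny / total)  # log prior P(label)
        for f, v in row.items():
            c = cond[(f, y)]
            # Laplace smoothing so unseen values do not zero the posterior
            score += math.log((c[v] + 1) / (ny + len(c) + 1))
        if score > best_score:
            best, best_score = y, score
    return best
```

The classification-rate evaluation in the thesis corresponds to comparing such predictions against held-out course outcomes.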
  • Luhtakanta, Anna (2019)
    Finding and exploring relevant information within the huge amount of available information is crucial in today's world. An information need can be a specific, precise search or a broad exploratory search, or something between the two. An entity-based search engine could therefore provide a solution that combines these two search goals. The focus of this study is to 1) review previous research on different approaches to entity-based information retrieval and 2) implement a system that aims to serve both precise and exploratory information needs, regardless of whether the search is made using a basic free-form query or a query with multiple entities. It is essential to improve search engines to support different types of information need in the ever-expanding information space.
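One way to support both query styles in a single structure, sketched here as an illustration rather than the thesis's implementation, is an inverted index that posts each document under both its terms and its annotated entity identifiers. The entity ids below are hypothetical labels.

```python
# A single inverted index serving free-form and entity-constrained queries.
from collections import defaultdict

class EntityIndex:
    def __init__(self):
        self.postings = defaultdict(set)  # term or entity id -> set of doc ids

    def add(self, doc_id, text, entities=()):
        for term in text.lower().split():
            self.postings[term].add(doc_id)
        for ent in entities:  # entity ids such as "E:topic-model" are invented
            self.postings[ent].add(doc_id)

    def search(self, terms=(), entities=()):
        """Documents matching ANY query term, restricted to ALL given entities."""
        term_sets = [self.postings[t.lower()] for t in terms]
        candidates = set().union(*term_sets) if term_sets else None
        for ent in entities:
            hits = self.postings[ent]
            candidates = set(hits) if candidates is None else candidates & hits
        return candidates if candidates is not None else set()
```

A free-form query uses only `terms`; an exploratory, entity-driven query supplies `entities`, and a mixed query supplies both.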
  • Lindgren, Eveliina (2015)
    An experiment-driven approach to software product and service development is getting increasing attention as a way to channel limited resources to the efficient creation of customer value. In this approach, software functionalities are developed incrementally and validated in continuous experiments with stakeholders such as customers and users. The experiments provide factual feedback for guiding subsequent development. Although case studies on experimentation conventions in the industry exist, an understanding of the state of the practice is incomplete. Furthermore, the obstacles and success factors of continuous experimentation have been little discussed. To these ends, an interview-based qualitative survey was conducted, exploring the experimentation experiences of ten software development companies. The study found that although the principles of continuous experimentation resonated with industry practitioners, the state of the practice was not mature. In particular, experimentation was rarely systematic and continuous. Key challenges related to changing organizational culture, accelerating development cycle speed, measuring customer value and product success, and securing resources. Success factors included an empowered organizational culture and deep customer and domain knowledge. There was also a good availability of technical tools and competence to support experimentation.
  • Ahlfors, Dennis (2022)
    While the role of IT and computer science in society is on the rise, interest in computer science education is also growing. Research covering study success and study paths is important for understanding student needs and for developing educational programmes further. Using a data set covering student records from 2010 to 2020, this thesis aims to provide key insights and baseline research on computer science study success and study paths at the University of Helsinki. Using novel visualizations and descriptive statistics, it builds a picture of the evolution of study paths and student success over a 10-year timeframe, providing much-needed contextual information to be used as inspiration for future focused research into the phenomena discovered. The visualizations, combined with statistical results, show that certain student groups seem to have better study success and that there are differences in the study paths chosen by the student groups. It is also shown that graduation rates from the Bachelor's Programme in Computer Science are generally low, with some student groups showing higher-than-average graduation rates. Time from admission to graduation is longer than the suggested time, and the sample study paths provided by the university are not generally followed, leading to the conclusion that the programme structure needs assessment to better accommodate students with diverse academic backgrounds and differing personal study plans.
  • Barral, Oswald (2013)
    This thesis goes further in the study of implicit indicators used to infer interest in documents in information retrieval tasks. We study the behavior of two different categories of implicit indicators: fixation-derived features (number of fixations, average fixation duration, regression ratio, length of forward saccades) and physiology (pupil dilation, electrodermal activity). With the limited number of participants at our disposal, we study how these measures react when participants read documents at three different reading rates. Most of the fixation-derived features are found to differ significantly when reading at different speeds. Furthermore, the ability of pupil size and electrodermal activity to indicate perceived relevance is found to be intrinsically dependent on reading speed: when users read at a comfortable speed, these measures can correctly discriminate relevance judgments, but they fail when the reading speed is increased. Therefore, the outcomes of this thesis strongly suggest taking reading speed into account when designing highly adaptive information retrieval systems.
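The fixation-derived features named above can be sketched from a hypothetical eye-tracking record: one (x_position, duration_ms) pair per fixation, in temporal order along a line of text. Measuring saccade length in raw x-coordinates is a simplifying assumption, not the thesis's exact operationalization.

```python
# Compute simple fixation-derived reading features from ordered fixations.

def fixation_features(fixations):
    xs = [x for x, _ in fixations]
    durations = [d for _, d in fixations]
    saccades = [b - a for a, b in zip(xs, xs[1:])]  # signed jumps between fixations
    forward = [s for s in saccades if s > 0]
    regressions = [s for s in saccades if s < 0]
    return {
        "n_fixations": len(fixations),
        "avg_fixation_ms": sum(durations) / len(durations),
        # share of saccades that move backwards in the text
        "regression_ratio": len(regressions) / len(saccades) if saccades else 0.0,
        "mean_forward_saccade": sum(forward) / len(forward) if forward else 0.0,
    }
```

Comparing these feature values across imposed reading rates is, in essence, the first analysis the abstract describes.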
  • Krause, Tina (2023)
    The use of fossil fuels is a significant contributor to greenhouse gas emissions, making the transition to zero-carbon energy systems and improvements in energy efficiency important in climate change mitigation. The energy transition also puts the citizen in a more influential position and changes the traditional dynamics between energy producers and consumers when citizens produce their own energy. Furthermore, due to the opportunity and capacity of the demand side, the energy transition places a greater emphasis on energy planning at the local level, which requires new solutions and changes in the management of the energy system. One rising phenomenon is the potential of bottom-up developments like energy communities. Within the building sector, housing cooperatives have started to emerge as energy communities, offering a way to address the energy transition through bottom-up-driven energy actions that provide renewable energy and other benefits to the local community. This master's thesis examines housing cooperatives' energy communities and their role in the energy transition. The research addresses the shared renovation project in Hepokulta, Turku, seen from the energy community perspective. Furthermore, the research highlights the importance of niche development in sustainable transition, acknowledging energy communities as socio-technical niches where development is partly embedded in renewable energy technology and partly in new practices. The concept of energy community is therefore analysed through the lens of Strategic Niche Management, which focuses on expectations, networks, and learning. This research aims to analyse how residents in Hepokulta perceive the energy community project through the niche processes and how the development of energy communities might affect urban development.
Analysing the residents' perceptions provides insight into the energy community's characteristics and the relationship between residents and the project development processes. Additionally, the analysis identifies matters that could be changed to improve the development. The thesis uses a mixed-methods approach, combining quantitative and qualitative data collected through a survey sent to the eight housing cooperatives in Hepokulta. The research showed that residents perceive the shared project in Hepokulta as essential for the area's development. Moreover, many residents overlooked the social aspects of the development, highlighting the absence of the energy community perspective in the renovation. The findings suggest some weaknesses within the three niche processes, including the early involvement of residents and communication. Furthermore, although the residents perceived themselves as important actors, and although the literature emphasises the importance of the demand side in future energy systems, the research revealed that the connection between project development and the residents is still lacking. However, the analysis indicates that introducing additional actors could help the energy community develop. External assistance could, for instance, benefit the housing cooperatives by facilitating improvements in the decision-making processes, the network between actors, and the sharing of information and skills.
  • Wachirapong, Fahsinee (2023)
    The importance of topic modeling in the analysis of extensive textual data is magnified by the inefficiency of manual work due to its time-consuming nature. Data preprocessing is a critical step before feeding text data into analysis. This process ensures that irrelevant information is removed and the remaining text is suitably formatted for topic modeling. However, the absence of standard rules often leads practitioners to adopt undisclosed or poorly understood preprocessing strategies. This potentially impacts the reproducibility and comparability of research findings. This thesis examines text preprocessing steps, including lowercase conversion, removal of non-alphabetic characters, stopword elimination, stemming, and lemmatization, and explores their influence on data quality, vocabulary size, and the topic interpretations generated by the topic model. Additionally, it examines variations in the order of preprocessing steps and their impact on the topic model's outcomes. Our examination spans 120 diverse preprocessing approaches on the Manifesto Project Dataset. The results underscore the substantial impact of preprocessing strategies on perplexity scores and highlight the challenges in determining the optimal number of topics and interpreting final results. Importantly, our study raises awareness of the role of data preprocessing in shaping the perceived themes and content of identified topics, and proposes recommendations for researchers to consider before performing data preprocessing.
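The preprocessing steps studied above can be written as composable functions so that different orderings are easy to compare. The stopword list and the suffix-stripping "stemmer" below are tiny illustrative stand-ins; an actual study would use full resources such as NLTK stopword lists and a Porter stemmer.

```python
# Composable preprocessing steps; order of application is a study variable.
import re

STOPWORDS = {"the", "a", "of", "and", "to", "in"}  # illustrative subset only

def lowercase(tokens):
    return [t.lower() for t in tokens]

def strip_nonalpha(tokens):
    cleaned = (re.sub(r"[^a-zA-Z]", "", t) for t in tokens)
    return [t for t in cleaned if t]  # drop tokens emptied by the removal

def drop_stopwords(tokens):
    return [t for t in tokens if t not in STOPWORDS]

def stem(tokens):
    # crude suffix stripping, standing in for a real stemmer
    return [re.sub(r"(ing|ed|s)$", "", t) for t in tokens]

def preprocess(text, steps):
    tokens = text.split()
    for step in steps:
        tokens = step(tokens)
    return tokens
```

Ordering matters: running `drop_stopwords` before `lowercase` leaves capitalized stopwords like "The" in the vocabulary, which is one concrete way a preprocessing sequence can change the vocabulary a topic model sees.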
  • Heinonen, Ava (2020)
    The design of instructional material affects learning from it. Abstraction, i.e. limiting details and presenting difficult concepts by linking them with familiar objects, can reduce the burden on working memory and make learning easier. The presence of visualizations, and the level to which students can interact with them and modify them (also referred to as engagement), can promote information processing. This thesis presents the results of a study using a 2x3 experimental design with abstraction level (high abstraction, low abstraction) and engagement level (no viewing, viewing, presenting) as the factors. The study consisted of two experiments with different topics: hash tables and multidimensional arrays. We analyzed the effect of these factors on instructional efficiency and learning gain, accounting for prior knowledge and prior cognitive load. We observed that high-abstraction conditions limited cognitive load during studying for all participants, but were particularly beneficial for participants with some prior knowledge of the topic they studied. We also observed that higher engagement levels benefit participants with no prior knowledge of the topic they studied, but not necessarily participants with some prior knowledge. Low cognitive load in the pre-test phase makes studying easier regardless of the instructional material, as does knowledge of the topic being studied. Our results indicate that abstractions in, and engagement with, learning materials need to be designed with the students and their knowledge levels in mind. However, further research is needed to assess which components of different abstraction levels affect learning outcomes, and why and how cognitive load in the pre-test phase affects cognitive load throughout studying and testing.
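The abstract does not specify how instructional efficiency was computed; a widely used definition (which the thesis may or may not follow) combines standardized performance and mental-effort scores per participant. A sketch under that assumption:

```python
from statistics import mean, stdev

def zscores(xs):
    """Standardize a list of values to zero mean and unit (sample) variance."""
    m, s = mean(xs), stdev(xs)
    return [(x - m) / s for x in xs]

def instructional_efficiency(performance, effort):
    """Per-participant efficiency E = (z_P - z_E) / sqrt(2):
    high performance reached with low reported mental effort scores high,
    low performance at high effort scores low."""
    root2 = 2 ** 0.5
    return [(zp - ze) / root2
            for zp, ze in zip(zscores(performance), zscores(effort))]

# Hypothetical scores for three participants:
print(instructional_efficiency([1, 2, 3], [3, 2, 1]))
```

With performance rising while effort falls, efficiency increases monotonically across the three hypothetical participants.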
  • Korhonen, Keijo (2022)
    The variational quantum eigensolver (VQE) is one of the most promising proposals for a hybrid quantum-classical algorithm designed to take advantage of near-term quantum computers. With the VQE it is possible to find ground-state properties of various molecules, a task for which many classical algorithms have been developed, but these become either too inaccurate or too resource-intensive, especially for so-called strongly correlated problems. The advantage of the VQE lies in the ability of a quantum computer to represent a complex system with fewer so-called qubits than a classical computer would need bits, thus making the simulation of large molecules possible. One of the major bottlenecks preventing the VQE from becoming viable for simulating large molecules, however, is the scaling of the number of measurements necessary to estimate expectation values of operators. Numerous solutions have been proposed, including the use of adaptive informationally complete positive operator-valued measures (IC-POVMs) by García-Pérez et al. (2021). Adaptive IC-POVMs have been shown to improve the precision of expectation-value estimates on quantum computers, with better scaling in the number of measurements compared to existing methods. The use of these adaptive IC-POVMs in a VQE allows for more precise energy estimations, and for additional expectation-value estimations of separate operators, without any further overhead on the quantum computer. We show that this approach improves upon existing measurement schemes and adds a layer of flexibility, as IC-POVMs represent a form of generalized measurements. In addition to a naive implementation of IC-POVMs as part of the energy estimations in the VQE, we propose techniques to reduce the number of measurements, either by adapting the number of measurements necessary for a given energy estimation or through the estimation of the operator variance for a Hamiltonian. We present simulation results for the former technique, showing that we are able to reduce the number of measurements while retaining the improvement in measurement precision obtained from IC-POVMs.
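The measurement-scaling bottleneck the abstract refers to can be shown in miniature: estimating even a single-qubit expectation value ⟨Z⟩ from a finite number of shots carries a statistical error that shrinks only as 1/√N. This toy projective-measurement simulation is not the IC-POVM scheme of the thesis, just an illustration of the baseline problem it addresses:

```python
import random
from math import cos, pi

def estimate_expectation_z(theta, shots, rng):
    """Estimate <Z> for the state cos(theta/2)|0> + sin(theta/2)|1>
    by simulating projective Z-basis measurements; exact value is cos(theta)."""
    p0 = cos(theta / 2) ** 2                       # Born-rule probability of outcome 0
    counts0 = sum(rng.random() < p0 for _ in range(shots))
    return 2 * counts0 / shots - 1                 # <Z> = (+1)*P(0) + (-1)*P(1)

rng = random.Random(0)
for shots in (100, 10_000, 1_000_000):
    est = estimate_expectation_z(pi / 3, shots, rng)
    # The error decreases roughly as 1/sqrt(shots):
    print(shots, abs(est - cos(pi / 3)))
```

A molecular Hamiltonian contains many non-commuting terms, each requiring such shot budgets, which is why measurement schemes with better scaling matter.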
  • Silvennoinen, Meeri (2022)
    Malaria is a major cause of human mortality, morbidity, and economic loss. P. falciparum is one of six Plasmodium species that cause malaria and is widespread in sub-Saharan Africa. Many of the currently used antimalarial drugs have become less effective, have adverse effects, and are highly expensive, so new ones are needed. mPPases are membrane-integral pyrophosphatases found in the vacuolar membranes of protozoa but not in humans. These enzymes pump sodium ions and/or protons across the membrane and are crucial for parasite survival and proliferation, which makes them promising targets for new drug development. In this study we aimed to identify and characterize transient pockets in mPPases that could offer suitable ligand-binding sites. P. falciparum was chosen because of its therapeutic interest, and T. maritima and V. radiata were chosen because they are test systems in compound discovery. The research was performed using molecular modelling techniques, mainly homology modelling, molecular dynamics, and docking. mPPases from the three species were used to build five different systems: P. falciparum (apo closed conformation), T. maritima (apo open, open with ligand, and apo closed), and V. radiata (open with ligand). No 3D structure of the P. falciparum mPPase is available, so a homology model was built using the closest available structure, the V. radiata mPPase, as a template. 100 ns molecular dynamics simulations were run for each of the five systems: a monomeric mPPase for P. falciparum and dimeric mPPases for the others. For each of the five trajectories, clustering was used to select two representative 3D structures, the two most dissimilar to one another, for further analysis. The selected 3D structures were first analyzed to identify possible binding pockets using two independent methods: SiteMap and blind docking (where no pre-determined cavity is set for docking). A second set of experiments, using different scores (druggability, enclosure, exposure, …) and targeted docking, was then run to characterize all the located pockets. As a result, only half of the catalytic pockets were identified. None of the transient pockets were identified in the P. falciparum mPPase, and all of them were located within the membrane. Docking was performed using compounds that had shown inhibitory behaviour in previous studies, but it did not give good results in the tested structures. In the end, none of the transient pockets were interesting for further study.
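Selecting the two most mutually dissimilar structures from a trajectory can be sketched with a pairwise-RMSD criterion. The coordinates below are invented, and the thesis's actual clustering tooling is not specified in the abstract; this only illustrates the selection rule:

```python
from itertools import combinations
from math import sqrt

def rmsd(a, b):
    """RMSD between two conformations given as equal-length lists of
    (x, y, z) atom coordinates, assumed already superimposed."""
    n = len(a)
    return sqrt(sum((p - q) ** 2
                    for pa, pb in zip(a, b)
                    for p, q in zip(pa, pb)) / n)

def most_dissimilar_pair(conformations):
    """Indices of the two conformations with the largest mutual RMSD."""
    return max(combinations(range(len(conformations)), 2),
               key=lambda ij: rmsd(conformations[ij[0]], conformations[ij[1]]))

# Three made-up two-atom conformations; the third sits far from the first two:
A = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
B = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.1)]
C = [(5.0, 5.0, 5.0), (6.0, 5.0, 5.0)]
print(most_dissimilar_pair([A, B, C]))  # → (0, 2)
```

In practice the two representatives would be cluster centroids of a full trajectory rather than raw frames, but the dissimilarity criterion is the same.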
  • Koskinen, Anssi (2020)
    The applied mathematical field of inverse problems studies how to recover an unknown function from a set of possibly incomplete and noisy observations. One example of a real-life inverse problem is image destriping, the process of removing stripes from images. Stripe noise is a very common phenomenon in various fields, such as satellite remote sensing and dental x-ray imaging. In this thesis we study methods to remove stripe noise from dental x-ray images. The stripes in the images are a consequence of the geometry of the measurement and the sensor. In x-ray imaging, x-rays are sent at a certain intensity through the measured object, and the remaining intensity is measured using an x-ray detector. The detectors used in this thesis convert the remaining x-rays directly into electrical signals, which are then measured and finally processed into an image. We observe that the measured values behave according to an exponential model and use this knowledge to recast destriping as a nonlinear fitting problem. We study two linearization methods and three iterative methods, and we examine the performance of the correction algorithms on both simulated and real stripe images. The results of the experiments show that although some of the fitting methods give better results in the least-squares sense, the exponential prior leaves some visible line artefacts. This suggests that the methods could be further improved by applying a suitable regularization method. We believe that this study is a good baseline for a better correction method.
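One standard way to linearize an exponential fitting problem of the kind described (the thesis's exact intensity model and its two linearization methods are not reproduced here) is ordinary least squares on log-transformed data:

```python
from math import exp, log

def fit_exponential(xs, ys):
    """Fit y = a * exp(b * x) by linearizing ln y = ln a + b * x
    and solving the resulting linear least-squares problem in closed form."""
    n = len(xs)
    ls = [log(y) for y in ys]          # requires strictly positive measurements
    mx = sum(xs) / n
    ml = sum(ls) / n
    b = (sum((x - mx) * (l - ml) for x, l in zip(xs, ls))
         / sum((x - mx) ** 2 for x in xs))
    a = exp(ml - b * mx)
    return a, b

# Noise-free data generated from y = 2 * exp(0.5 * x) is recovered exactly:
xs = [0, 1, 2, 3]
a, b = fit_exponential(xs, [2 * exp(0.5 * x) for x in xs])
print(a, b)  # ≈ 2.0, 0.5
```

A caveat worth noting: the log transform reweights the noise, so on noisy data the log-linear solution generally differs from the true nonlinear least-squares fit, which is one reason to compare linearization against iterative methods.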