
Browsing by Title


  • Rohamo, Paavo (2024)
    The fast development cycles of web user interfaces (UIs) make it hard for test automation to keep up with changes in web elements. Test automation may suffer from test breakages after developers update the UI of the System Under Test (SUT). Test breakages are not defects or bugs in the SUT but failures in the test automation code, and failing to correctly locate a web element in the UI is one of the key reasons they occur. Prior work on self-healing element locators has traditionally relied on different algorithms and, more recently, on Large Language Models (LLMs). This thesis aims to discover how to enable self-healing locators for Robot Framework web UI tests, whether some web element locator types are more easily repaired than others, and which LLMs should be used for this task. An experimental study was conducted on enabling self-healing locators for Robot Framework. A custom Robot Framework library was created with Python and tested against eight different locator strategies derived from a locator breakage taxonomy. The results show that the best self-healing performance is obtained with the larger LLMs: GPT-4 Turbo and Mistral Large showed the best accuracy, repairing 87.5% of the locators in the Robot Framework test cases, while the worst performer, Mistral 7B Instruct, was unable to correct any locators. Using LLMs for self-healing locators in Robot Framework tests is thus possible. To get the best results, the findings suggest that practitioners should focus on LLM prompt design, use a candidate algorithm with locator version history, and use the largest LLMs available.
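    The repair loop described above can be illustrated as a prompt-build-and-parse step around an arbitrary LLM client. This is a minimal sketch, not the thesis's actual library: the `llm_complete` callable and the prompt wording are assumptions made for illustration.

```python
def build_repair_prompt(broken_locator: str, page_source: str) -> str:
    """Compose a repair prompt: the broken locator plus the current DOM."""
    return (
        "The following web element locator no longer matches any element:\n"
        f"  {broken_locator}\n"
        "Current page source:\n"
        f"{page_source}\n"
        "Reply with a single corrected locator and nothing else."
    )

def heal_locator(broken_locator, page_source, llm_complete):
    """Ask the model for a replacement locator and strip any extra text.

    llm_complete: any callable str -> str wrapping an LLM API.
    """
    reply = llm_complete(build_repair_prompt(broken_locator, page_source))
    # Keep only the first non-empty line in case the model adds commentary.
    return reply.strip().splitlines()[0].strip()
```

In a Robot Framework library, a keyword would call `heal_locator` when element lookup fails and retry the lookup with the repaired locator.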
  • Mickwitz, Valter (2022)
    Developments in mass spectrometry have been one of the driving factors behind recent decades' progress in understanding atmospheric chemistry, and the data collected with mass spectrometry is one of the greatest assets for further developing knowledge in this area. However, the analysis of this data is a slow and laborious process, and new methods are required to make accessible all the information it contains. The goal of this thesis was to develop an algorithm for the automatic identification of chemical compositions from mass spectra of limited resolution, with the aim of considerably reducing the time required for mass spectrum analysis. The algorithm works by selecting compositions that maximize the likelihood of observing the measured data ($\chi^2$ fitting) and then choosing the most cost-effective model, meaning the model that can satisfactorily explain the data with as few compositions as possible. To identify the most cost-effective model, a modified version of the Bayesian information criterion was used. The operating principles of the algorithm were further developed based on results obtained from tests with synthetic data. The final algorithm was tested with data collected in earlier experiments, and its results were compared with the analyses performed in connection with those experiments. Based on the results, the algorithm works: the choices it makes are motivated by the data and in most cases correspond to the choices a researcher would make in the same situation. The algorithm in its current form can thus be applied to mass spectrum analysis and is expected to considerably shorten the time required to identify chemical compositions from mass spectra. A number of areas for development were also identified that are expected to further improve the algorithm's performance.
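    The fit-then-penalize idea can be sketched as greedy forward selection over candidate peak shapes with a BIC-style score. This is a minimal illustration, not the thesis's algorithm: the Gaussian peak shapes, the n·ln(RSS/n) fit term, and the greedy loop are all assumptions of the sketch.

```python
import numpy as np

def bic_score(rss, k, n):
    # BIC-style score: n*ln(RSS/n) as the goodness-of-fit term plus a
    # complexity penalty of ln(n) per included composition.
    return n * np.log(max(rss, 1e-12) / n) + k * np.log(n)

def select_compositions(signal, candidate_peaks, max_k=4):
    """Greedy forward selection: at each step add the candidate peak that
    most reduces the squared residual, then keep the model size with the
    lowest BIC-style score."""
    n = len(signal)
    chosen, best_model, best_score = [], [], np.inf
    remaining = list(range(len(candidate_peaks)))
    for _ in range(min(max_k, len(remaining))):
        trials = []
        for j in remaining:
            A = np.column_stack([candidate_peaks[i] for i in chosen + [j]])
            coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
            coef = np.clip(coef, 0.0, None)  # abundances are non-negative
            rss = float(np.sum((signal - A @ coef) ** 2))
            trials.append((rss, j))
        rss, j = min(trials)
        chosen.append(j)
        remaining.remove(j)
        score = bic_score(rss, len(chosen), n)
        if score < best_score:
            best_model, best_score = list(chosen), score
    return best_model
```

On a synthetic spectrum built from two of four candidate peaks plus noise, the sketch recovers the two contributing compositions while the penalty discourages fitting the noise with extra peaks.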
  • Corbin, Antoine (2015)
    Many diseases depend on the concentration of hormones in the body, especially in women. The risk of this type of disease, such as sex-related cancers, is lower in Asian countries thanks to their high consumption of soy. Soy contains several isoflavones, such as daidzein and genistein, which act as phytoestrogens once metabolized in the body: they are structurally similar to estrogens but have little or no estrogenic biological effect. (S)-dihydrodaidzein, a metabolite of daidzein, has a stronger bioactivity than its parent compound, but only 30% of the world's population can produce it naturally. The present research focused on the stereoselective synthesis of dihydrodaidzein by hydrogenation using a chiral catalyst that favors the formation of only one enantiomer. (S)-proline was used as the chiral modifier. Two types of hydrogenation were investigated, transfer hydrogenation and hydrogenation under pressure, the latter giving the highest enantiomeric excess: 44% was achieved after 3 hours of hydrogenation with palladium on activated carbon as catalyst at 20 bar of dihydrogen pressure. This is the first report of the use of (S)-proline on a compound other than isophorone, and a higher enantiomeric excess was obtained under the same conditions. The study relied on one- and two-dimensional NMR, preparative HPLC-MS, and chiral HPLC to identify and obtain pure compounds and to determine the enantiomeric excess of dihydrodaidzein. Future research should focus on the kinetic study of this reaction and the elucidation of its mechanism.
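    The enantiomeric excess quoted above is computed from the relative amounts of the two enantiomers as determined by chiral HPLC; a 72:28 ratio of peak areas, for example, corresponds to the 44% figure. A minimal helper (the example peak areas are illustrative, not measured values from the thesis):

```python
def enantiomeric_excess(area_major: float, area_minor: float) -> float:
    """ee (%) from chiral-HPLC peak areas of the two enantiomers:
    ee = (major - minor) / (major + minor) * 100."""
    return 100.0 * (area_major - area_minor) / (area_major + area_minor)
```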
  • Ikonen, Eetu (2023)
    The maximum constraint satisfaction problem (MaxCSP) is a combinatorial optimization problem in which the set of feasible solutions is expressed using decision variables and constraints on how the variables can be assigned. It can be used to represent a wide range of other combinatorial optimization problems. The maximum satisfiability problem (MaxSAT) is a restricted variant of MaxCSP with the additional restrictions that all variables must be Boolean and all constraints must be logical Boolean formulas. Because of this, expressing problems using MaxSAT can be unintuitive. The known solving methods for MaxSAT are, however, more efficient than those for MaxCSP, so it is desirable to express problems using MaxSAT. Every MaxCSP instance with only finite-domain variables can be encoded into an equivalent MaxSAT instance. Encoding a MaxCSP instance as a MaxSAT instance lets users combine the strengths of both approaches: expressing problems using the more intuitive MaxCSP formalism while solving them with the more efficient MaxSAT solving methods. In this thesis, we overview three common MaxCSP-to-MaxSAT encodings, the sparse, log, and order encodings, which differ in how they encode an integer variable into a set of Boolean variables. We use correlation clustering as a practical example for comparing the encodings: we first represent correlation clustering problems as MaxCSPs, then encode them into MaxSAT instances, and solve these with state-of-the-art MaxSAT solvers. We compare the encodings by measuring the time it takes to encode a MaxCSP instance into a MaxSAT instance and the time it takes to solve the MaxSAT instance. The scope of our experiments is too small to draw general conclusions, but in our experiments the log encoding was the best overall choice.
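    Of the three encodings, the order encoding is the easiest to sketch: an integer x in [0, d-1] becomes Boolean variables o_i meaning "x ≤ i", plus axiom clauses o_i → o_{i+1}. The sketch below emits DIMACS-style signed-integer clauses; the variable numbering is an assumption of the sketch, not the thesis's implementation.

```python
def order_encode(domain_size):
    """Order-encode an integer x in [0, d-1]: Boolean o_i <=> (x <= i)
    for i = 0..d-2, plus ordering clauses o_i -> o_{i+1}.
    Variables are numbered 1..d-1; clauses are DIMACS-style signed ints."""
    n_vars = domain_size - 1
    clauses = [[-i, i + 1] for i in range(1, n_vars)]  # (not o_i) or o_{i+1}
    return n_vars, clauses

def decode(assignment):
    """Recover x from a truth assignment over o_1..o_{d-1}:
    x is the index of the first True o_i, or d-1 if none is True."""
    for i, val in enumerate(assignment):
        if val:
            return i
    return len(assignment)
```

For domain size 4 this yields three Boolean variables and the two clauses (¬o_1 ∨ o_2) and (¬o_2 ∨ o_3); the log encoding would instead use two bits, and the sparse encoding four one-hot variables.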
  • Honkalammi, Henri (2017)
    Propargyl- or methacrylate-end-functionalized polylactides are important intermediates in polymer synthesis toward applications in the biomedical field. Through these intermediates, hydrophobic polylactides are post-polymerized with hydrophilic monomers or coupled with preformed hydrophilic homopolymers to obtain amphiphilic copolymers with qualities they would not otherwise have. These systems show a unique property, self-assembly into micellar structures, that can be utilized in drug delivery applications. Polylactides (PLAs) offer biocompatibility and biodegradability, with non-toxic and non-carcinogenic metabolites, for these biomedical applications. Understanding the synthesis, modification, and processing of PLA in this light is a cornerstone of the successful development of new PLA-based biomaterials. This thesis gives an overview of polylactide end-functionalization and post-polymerization and documents attempts to synthesize different end-functional polylactides. The experimental part focuses on the synthesis of propargyl- and methacrylate-end-functionalized PLAs of different chain lengths and their characterization with appropriate polymer characterization techniques.
  • Pajula, Ilari (2024)
    Combining data from visual and inertial sensors effectively reduces the inherent errors of each modality, enhancing the robustness of sensor fusion for accurate 6-DoF motion estimation over extended periods. While traditional SfM and SLAM frameworks are well established in the literature and in real-world applications, purely end-to-end learnable SfM and SLAM networks are still scarce. The adaptability of fully trained models to different system configurations and navigation setups holds great potential for future developments in this field. This thesis introduces and assesses two novel end-to-end trainable sensor-fusion models using a supervised learning approach, tested on established navigation benchmarks and custom datasets. The first model utilizes optical flow, revealing its limitations in handling the complex camera movements present in pedestrian motion. The second model addresses these shortcomings by using feature-point matching and a completely original design.
  • Moilanen, Joonas (2012)
    Due to increasing urbanization, research on urban climate has grown markedly over recent decades. The best-known phenomenon caused by urbanization is the urban heat island, in which the night-time air temperature of a city remains clearly higher than in the surrounding rural areas. This and many other phenomena experienced in cities are anthropogenic, so it is important to determine how growing urbanization will affect our everyday life, health, and environment. In September 2010 the SMEAR III (Station for Measuring Ecosystem-Atmosphere Relationships) urban measurement station in Kumpula, Helsinki gained a new measurement point on top of Hotel Torni in central Helsinki, where, among other quantities, the turbulent fluxes of latent and sensible heat are measured. This thesis examines the urban energy balance in central Helsinki using the Torni measurements and studies the modelling of the energy balance at one of the world's northernmost urban measurement stations. The measurements cover a full year, from November 2010 to October 2011. The measured turbulent heat fluxes are compared with fluxes modelled with the Surface Urban Energy and Water Balance Scheme (SUEWS); the aim is to determine how well the model performs in the very densely built urban area of central Helsinki. In addition to SUEWS, the Kormann and Meixner footprint model is used to estimate the source areas of the measured fluxes. According to the results, the sensible heat flux in central Helsinki is many times larger than the latent heat flux. When the energy balance components were modelled with SUEWS, the sensible heat flux values were fairly close to the measured values, but the modelled latent heat flux values were too small. Problems were caused by too little available energy and moisture, resulting among other things from the too-large albedo used in the modelling and the coarse resolution of the land-use data. According to the model, the anthropogenic heat flux in Helsinki was about 20-50 W m⁻², which corresponds to the averages of temperate-zone cities. The storage term behaved as expected: during the day energy is stored in the structures and at night it is released into the air as turbulent fluxes. The summed energy source terms were nevertheless not large enough, which became clear from a comparison based on the energy balance ratio, for which a value of 1.26 was obtained.
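    The energy balance ratio mentioned at the end compares the measured sink terms with the modelled source terms of the urban energy balance Q* + Q_F = Q_H + Q_E + ΔQ_S. A minimal sketch with hypothetical flux values (only the 1.26 ratio comes from the abstract; the individual W m⁻² numbers below are invented for illustration):

```python
def energy_balance_ratio(q_star, q_f, q_h, q_e, dq_s):
    """Urban surface energy balance: Q* + Q_F = Q_H + Q_E + dQ_S.
    Ratio of sink terms to source terms; 1.0 means perfect closure,
    > 1.0 means the source terms are too small to explain the sinks."""
    return (q_h + q_e + dq_s) / (q_star + q_f)
```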
  • Poteri, Juho (2020)
    The Internet of Things (IoT) paradigm is seeing rapid adoption across multiple domains: industry, enterprise, agriculture, smart cities, and households, to name only a few. IoT applications often require wireless autonomy, placing challenging requirements on communication techniques and power supply methods. Wireless networking using devices with constrained energy, as is often the case in wireless sensor networks (WSNs), calls for explicit consideration of the conservation of the supplied power on the one hand and the efficiency of the power drawn and energy used on the other. As radio communications characteristically consume the bulk of the energy in wireless IoT systems, this constrained energy budget, combined with aspirations for long terminal device lifetimes, sets requirements for the communication protocols and techniques used. This thesis examines two open-architecture low-power wide-area network (LPWAN) standards with mesh networking support, along with their energy consumption profiles in the context of power-constrained wireless sensor networks. The introductory section is followed by an overview of IoT and WSN foundations and technologies. The following sections describe the IEEE 802.15.4 standard and ecosystem, and then the Bluetooth LE and Bluetooth Mesh standards. A discussion of these standards' characteristics, behavior, and applicability to power-constrained sensor networks is presented.
  • Tafsir, Miraj Hasnaine (2013)
    Energy efficiency is one of the key factors affecting both the environmental footprint and the operational expenses of telecommunication core networks. This thesis aims to find techniques for reducing energy consumption in telecommunication infrastructure nodes. The study concentrates on traffic management operations (e.g. media stream control, ATM adaptation) within network processors [LeJ03], categorized as the control plane. The control plane of a telecommunication infrastructure node is a custom-built high-performance cluster consisting of multiple general-purpose processors (GPPs) interconnected by a high-speed, low-latency network. Due to the application configuration of particular GPP units and redundancy requirements, energy usage is not optimal. Our approach is to gain elastic capacity within the control plane cluster to reduce power consumption, scaling down and waking up GPP units depending on the traffic load. For elasticity, the study adopts virtual machine (VM) migration in the control plane cluster through system virtualization, with the traffic load triggering VM migration on demand. Virtual machine live migration brings the benefit of enhanced performance and resiliency of the control plane cluster. We compare state-of-the-art power-aware computing resource scheduling in cluster-based nodes with the VM migration technique. Our research does not propose any change to the data plane architecture, as we concentrate on the control plane. This study shows that VM migration can be an efficient approach to significantly reducing the energy consumption of the control plane in cluster-based telecommunication infrastructure nodes without degrading performance or throughput, while guaranteeing full connectivity and maximum link utilization.
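    The scale-down idea can be illustrated with a toy bin-packing sketch: consolidate per-VM loads onto as few GPP units as possible so that emptied units can be powered down. This is an illustration only; the normalized capacity, the headroom threshold, and the first-fit-decreasing heuristic are assumptions of the sketch, not the thesis's mechanism.

```python
def plan_consolidation(loads, capacity=1.0, headroom=0.8):
    """Toy first-fit-decreasing packing of per-VM loads (fractions of one
    GPP unit) onto as few units as possible; each unit is filled only up
    to capacity * headroom so load spikes do not overload it. Units not
    in the returned plan can be scaled down to save power."""
    units = []
    for load in sorted(loads, reverse=True):
        for unit in units:
            if sum(unit) + load <= capacity * headroom:
                unit.append(load)
                break
        else:
            units.append([load])  # no unit has room: wake up another GPP
    return units
```

With four VMs at loads 0.4, 0.3, 0.2, and 0.1, the sketch packs them onto two units instead of four, so two GPP units could be scaled down.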
  • Saarikoski, Kasperi (2016)
    Network-intensive smartphone applications are becoming increasingly popular. Examples of such trending applications are social applications like Facebook, which rely on always-on connectivity, and multimedia streaming applications like YouTube. While the computing power of smartphones is constantly growing, the capacity of smartphone batteries is lagging behind. This imbalance has created an imperative for energy-efficient smartphone applications. One approach to increasing the energy efficiency of smartphone applications is to optimize their network connections via traffic shaping. Many existing proposals for shaping smartphone network traffic depend on modifications to the smartphone OS, the applications, or both. However, most modern smartphone OSes support establishing Virtual Private Networks (VPNs) from user-space applications, and our novel approach to traffic shaping takes advantage of this: we modified the OpenVPN tunneling software to perform traffic shaping by altering TCP flow control on tunneled packets. Subjecting heterogeneous network connections to traffic shaping without insight into their traffic patterns causes serious problems for certain applications, multimedia streaming applications being one example. We therefore developed a traffic identification feature that maps Android applications to their network connections, and we leverage it to selectively opt out of shaping traffic that is sensitive to shaping. We demonstrate this by selectively shaping background traffic in the presence of multimedia traffic. The purpose of the developed traffic shaper is to enhance the energy efficiency of smartphone applications. We evaluate it by collecting network traffic traces and assessing them with an RRC simulator. The four experiments cover multimedia streaming traffic, simulated background traffic, and concurrent multimedia and background traffic produced by simulation applications. We are able to enhance the energy efficiency of network transmissions across all experiments.
  • Shen, Cenyu (2013)
    The performance of today's smartphones is heavily limited by their batteries. To meet end-users' increasing demands for smartphones with better battery life, Android manufacturers are hunting for new approaches to improve their phones' batteries: for example, they add smart battery interfaces to existing batteries and provide more usable power-saving features and settings that users can apply to minimize power consumption. Unfortunately, users often have inadequate knowledge of the power characteristics and features of their mobile devices. This highlights the need to show users how to deal with limited battery lifetime and thus perform optimal and effective energy management. Especially when the battery gets critically low, users should have a good understanding of how to manage the remaining charge and take appropriate measures to make their phones last longer, at least to keep up basic email and call communication. The objective of this thesis is to provide Android users with a series of control points for maximizing the lifetime of their low-battery smartphones when they wish to maintain basic communication for emailing and calling. To achieve this goal, a detailed analysis is performed on an Android smartphone, the Samsung Google Nexus S. I first give an overview of energy profiling on the Android platform, presenting Carat as a practical example of an energy profiler that detects energy bugs on users' smartphones and provides them with diagnosis reports to reduce their phones' power consumption. I then investigate how general hardware subsystems (e.g. screen backlight and Internet connections) and background application processes impact smartphone battery life by undertaking physical power measurements of the tested phone under a wide range of realistic usage scenarios. Finally, I evaluate the experimental results from the users' perspective to determine under which circumstances battery life is maximized and, based on the findings, design actionable battery management recommendations for users. This study found that screen backlight, Internet connections, and background application processes indeed impact battery life when users email and call on low-battery smartphones. To improve the lifespan of smartphones used for emailing, users should connect to WiFi, lower the screen backlight to the minimum level, and stop running background application processes that Carat has identified as energy hogs or bugs. If users want to make Internet calls through Skype, they are recommended to choose WiFi connections and kill background application processes, while if the Dialer is used for calling, users should disable Internet connections to reduce the power consumption of their smartphones.
  • Zaka, Ayesha (2021)
    X-ray absorption spectroscopy (XAS) measures the absorption response of a system as a function of the incident X-ray photon energy. XAS is a great tool for material characterization due to its ability to reveal information specific to the chemical state of an element, using core-level electrons as a probe for the empty electronic states just above the Fermi level of the material (XANES) or for the neighboring atoms (EXAFS). For years, highly brilliant synchrotron light sources remained the center of attention for XAS experiments, but increasing competition for available beamtime at these facilities has led to growing interest in laboratory-scale X-ray spectroscopy instruments. However, the energy resolution of laboratory-scale instruments sometimes remains limited compared to their synchrotron counterparts. When operating at low Bragg angles, the finite source size can greatly reduce the energy resolution by introducing dispersion effects in the beam focus at the detector. One way to overcome this loss of resolution is to use a position-sensitive detector and apply the 'pixel compensation correction' method in the post-processing of the experimental data. The main focus of this study was to improve the energy resolution of a wavelength-dispersive laboratory-scale X-ray absorption spectrometer installed at the University of Helsinki Center for X-ray Spectroscopy. The project focuses on the case of the Fe K absorption edge at 7.112 keV and a Bragg angle of 71.74 degrees when using a silicon (531) monochromator crystal. Data corrected using this method showed sharper spectral features with reduced broadening. Moreover, the contributions of other geometrical factors to the energy resolution of this laboratory X-ray spectrometer were estimated using ray-tracing simulations, and the expected improvement in resolution due to the pixel compensation correction was calculated. The same technique can be extended to other X-ray absorption edges where the combination of a large deviation of the Bragg angle from 90 degrees and a large source size contributes the dominant factor to the instrument's energy resolution.
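    The quoted Bragg angle follows directly from Bragg's law and the Si(531) lattice spacing. A quick check (hc ≈ 12.3984 keV·Å and the silicon lattice constant a ≈ 5.4309 Å are standard reference values, not figures from the thesis):

```python
import math

def bragg_angle_deg(energy_keV, a_angstrom, h, k, l):
    """Bragg angle for a given photon energy and cubic-crystal reflection:
    lambda = hc/E;  d = a / sqrt(h^2 + k^2 + l^2);  sin(theta) = lambda / (2d)."""
    wavelength = 12.3984 / energy_keV                  # hc in keV*Angstrom
    d = a_angstrom / math.sqrt(h * h + k * k + l * l)  # interplanar spacing
    return math.degrees(math.asin(wavelength / (2 * d)))
```

For the Fe K edge at 7.112 keV and Si(531), this reproduces a Bragg angle of about 71.7 degrees, matching the value quoted in the abstract.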
  • Niskanen, Andreas (2017)
    Computational aspects of argumentation are a central research topic of modern artificial intelligence. A core formal model for argumentation, in which the inner structure of arguments is abstracted away, was provided by Dung in the form of abstract argumentation frameworks (AFs). Syntactically, AFs are directed graphs whose nodes represent arguments and whose edges represent attacks between them. Given an AF, sets of jointly acceptable arguments, or extensions, are defined via different semantics. The computational complexity of, and algorithmic solutions to, so-called static problems, such as the enumeration of extensions, are well studied. Since argumentation is a dynamic process, understanding the dynamic aspects of AFs is also important; however, the computational aspects of dynamic problems have not been studied as thoroughly. This work concentrates on different forms of enforcement, a core dynamic problem in abstract argumentation: given an AF, one wants to modify it by adding and removing attacks so that a given set of arguments becomes an extension (extension enforcement) or so that given arguments become credulously or skeptically accepted (status enforcement). In this thesis, the enforcement problem is viewed as a constrained optimization task in which the change to the attack structure is minimized. The computational complexity of the extension and status enforcement problems is analyzed, showing that they are in the general case NP-hard optimization problems. Motivated by this, algorithms are presented based on the Boolean optimization paradigm of maximum satisfiability (MaxSAT) for the NP-complete variants, and on counterexample-guided abstraction refinement (CEGAR) procedures, in which an interplay between MaxSAT and Boolean satisfiability (SAT) solvers is utilized, for problems beyond NP. The algorithms are implemented in the open-source software system Pakota, which is empirically evaluated on randomly generated enforcement instances.
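    For intuition about extension enforcement, the optimization problem can be stated as: flip (add or remove) the fewest attacks so that the given set becomes an extension. A brute-force sketch for toy instances under stable semantics (the semantics is chosen here only for concreteness, and the thesis's approach is MaxSAT/CEGAR, not enumeration):

```python
from itertools import combinations, product

def is_stable(args, attacks, S):
    """S is a stable extension iff it is conflict-free and attacks
    every argument outside S."""
    S = set(S)
    if any(a in S and b in S for a, b in attacks):
        return False
    return all(any(a in S and b == x for a, b in attacks) for x in args - S)

def enforce_stable(args, attacks, S):
    """Flip the fewest attacks so that S becomes a stable extension.
    Exponential enumeration over flip sets; toy instances only."""
    pairs = list(product(sorted(args), repeat=2))
    for k in range(len(pairs) + 1):
        for flips in combinations(pairs, k):
            new = set(attacks).symmetric_difference(flips)
            if is_stable(args, new, S):
                return new, k
```

On the AF with arguments {a, b} and no attacks, enforcing {a} as a stable extension requires exactly one change: adding the attack (a, b).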
  • Vidjeskog, Martin (2022)
    The traditional way of computing the Burrows-Wheeler transform (BWT) has been to first build a suffix array and then use it to obtain the BWT. While this approach runs in linear time, its space requirement is far from optimal: as the length of the input string grows, the required working space quickly becomes too large for ordinary computers to handle. To overcome this issue, researchers have proposed many different algorithms for building the BWT. In 2009, Daisuke Okanohara and Kunihiko Sadakane presented a new linear-time algorithm for BWT construction. The algorithm is relatively fast and requires far less working space than the traditional approach. It is based on a technique called induced sorting and can be seen as a state-of-the-art approach to internal-memory BWT construction. However, a proper exploration of how to implement the algorithm efficiently has not been undertaken. One 32-bit implementation of the algorithm is known to exist, but due to the limitations of 32-bit programs it can only handle input strings smaller than 4 GB. This thesis introduces the algorithm of Okanohara and Sadakane and implements a 64-bit version of it, which can in theory support input strings thousands of gigabytes in size. In addition to explaining the algorithm, the thesis compares the time and space requirements of the 64-bit implementation against other fast BWT algorithms.
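    The "traditional" suffix-array route mentioned above fits in a few lines, which also makes its space problem visible: the naive version below even materializes suffix copies while sorting, and real linear-time suffix-array construction still stores one index per input character. A minimal sketch using the usual '$' sentinel:

```python
def bwt_via_suffix_array(text: str) -> str:
    """BWT by the classic route: append a unique, lexicographically
    smallest sentinel, sort all suffixes, and take the character
    preceding each suffix in sorted order."""
    s = text + "$"
    sa = sorted(range(len(s)), key=lambda i: s[i:])  # naive O(n^2 log n) sort
    return "".join(s[i - 1] for i in sa)             # s[-1] is the sentinel
```

For the standard example "banana", this yields "annb$aa". Induced sorting avoids both the naive sort and much of the per-suffix bookkeeping.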
  • Piipponen, Kaiu (2017)
    Geothermal energy is a growing industry, and with Enhanced Geothermal System (EGS) technology it is possible to utilize geothermal energy even in low-heat-flow areas. The ongoing EGS project in southern Finland provides a great opportunity to learn about and explore EGS technologies in a challenging environment: hard crystalline rock, high pressure, and low hydraulic permeability. This work describes the physics behind an EGS plant and the basic concept of EGS, gives examples of existing plants, and calculates how much power a plant in Finland could produce. In order to plan and build a successful plant, suitable parameters for the system are determined by modelling, both analytically and numerically. The physical properties governing the EGS models are conductive and convective heat transfer and the hydraulic properties of the rock that allow fluid flow. Hydraulic permeability is discussed in detail because it is the key parameter in EGS: the rock is stimulated to enhance permeability and thereby make fluid flow possible through interconnected fractures. Permeability is a spatially correlated, lognormally distributed parameter, which makes fluid flow highly channelled. The modelling of heat and mass transfer aims to parametrize an EGS plant under the conditions of southern Finland. The parameters governing heat transfer with fluid flowing in the geothermal reservoir are the size of the reservoir and the fluid velocity, which depends on matrix permeability. The larger the reservoir, the more hot contact area the fluid encounters and the better it heats up; the slower the flow, the longer the fluid stays in the reservoir and the more it heats up, while high flow rates cool the reservoir rapidly. However, a large reservoir is difficult to achieve, maintaining enhanced permeability requires relatively high fluid flow rates, and the higher the flow rate, the more power the plant produces, so slow flow is not economically feasible. The analytical models are built in Matlab and the numerical models with the finite-element software COMSOL Multiphysics. The numerical models benchmark the analytical solutions and use spatially correlated permeability to modify the fluid flow pattern and to see how the reservoir temperature changes with the flow. The results show that creating a large reservoir that could operate for 20 years with the desired power production is unrealistic. A total output fluid flow of 10 kg/s is required to produce over 1 MW of power; at such a rate there is a risk that the reservoir cools and the output fluid temperature is no longer sufficient for power production. With heterogeneous permeability, the connectivity of the reservoir is not as good as with homogeneous permeability, and there is a risk that the total fluid flow in the reservoir is slower and therefore less power is produced.
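    The 1 MW at 10 kg/s figure is consistent with a simple heat-extraction estimate P = ṁ·c_p·ΔT. The inlet and outlet temperatures below are hypothetical, chosen only to illustrate the arithmetic; they are not values from the thesis:

```python
def thermal_power_MW(flow_kg_s, t_out_C, t_in_C, cp=4186.0):
    """Thermal power extracted from a fluid loop, in megawatts:
    P = m_dot * c_p * (T_out - T_in); cp is water's specific heat, J/(kg K)."""
    return flow_kg_s * cp * (t_out_C - t_in_C) / 1e6
```

With 10 kg/s and a hypothetical 25 K temperature gain (e.g. 20 °C in, 45 °C out), the extracted thermal power is about 1.05 MW, matching the order of magnitude quoted above.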
  • Rajala, Taneli (2014)
    Enhancing the tolerance of polymer electrolyte fuel cells to CO impurities would allow the use of lower-quality hydrogen, reducing costs without compromising fuel cell performance. In this work, the effect of carbon monoxide is mitigated by combining different methods, including air bleed, varying the anode flow rate, and using a state-of-the-art Pt-Ru catalyst, at two operating temperatures. The tolerance was investigated by feeding a novel arrangement of segmented cells with hydrogen containing less than 20 ppm of carbon monoxide. The anode exhaust gas was continuously analysed with a gas chromatograph. It was discovered that increasing the volumetric flow rate of hydrogen and, especially, utilising ruthenium in the catalyst enhance the carbon monoxide tolerance. When applying the air bleed, an oxygen/CO molar ratio of at least 117 was required to stop the poisoning with a platinum catalyst; approximately a fifth of the air bleed needed with platinum was sufficient with Pt-Ru. The results also suggest that when applying air bleed at elevated temperatures, it is beneficial to lower the cell temperature for the duration of the air bleed.
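    The O2/CO molar ratio of 117 can be translated into an air bleed level with a simplified stoichiometric estimate; the 20.9% O2 content of dry air is the only added constant, and consumption dynamics inside the cell are ignored, so this is an order-of-magnitude illustration only:

```python
def air_bleed_fraction(co_ppm, o2_co_ratio, o2_in_air=0.209):
    """Volume fraction of air to mix into the fuel stream so that the
    O2/CO molar ratio reaches the target (static mixing estimate)."""
    o2_ppm_needed = co_ppm * o2_co_ratio
    return o2_ppm_needed / (o2_in_air * 1e6)
```

For 20 ppm CO and a target ratio of 117, this gives an air bleed of roughly 1.1% by volume.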
  • Liu, Yuxuan Jr (2023)
    Accurate interpretation of high-resolution molecular spectral data is important for scientific research in fields such as atmospheric chemistry. This master's thesis describes the development of a user-friendly visualization tool to improve the accessibility and interpretation of spectral data in the HITRAN database. Using Tkinter, the Python interface for creating graphical user interfaces (GUIs), the spectral simulation tool simplifies the visualization and analysis of molecular spectra. Users can plot line intensities and absorption spectra of various molecular species and isotopes, and adjust parameters such as wavelength, wavenumber, frequency, temperature, pressure, path length, and volume mixing ratio (VMR). The GUI allows the selection of linear or logarithmic scales to improve the clarity and depth of the spectral analysis. The tool not only provides a practical application for visualizing complex spectral data but also helps make the HITRAN database more accessible to researchers, professionals, and students in related fields. In conclusion, the thesis describes the background and theory of the tool, the technical implementation of the GUI and its validation, a case study on NH3 measurements, and potential future improvements.
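    The core quantity such a tool derives from line data is the transmission through a gas column, which follows the Beer-Lambert law with the number density given by the ideal gas law. A minimal sketch (the cross-section value in the example is hypothetical, and this is not the thesis's code):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def transmittance(sigma_cm2, pressure_Pa, temperature_K, length_cm, vmr):
    """Beer-Lambert transmission through a uniform gas column:
    n = P * vmr / (k_B * T) gives the absorber number density,
    and I/I0 = exp(-sigma * n * L)."""
    n_m3 = pressure_Pa * vmr / (K_B * temperature_K)  # molecules per m^3
    n_cm3 = n_m3 * 1e-6                               # molecules per cm^3
    return math.exp(-sigma_cm2 * n_cm3 * length_cm)
```

A GUI wraps calls like this in sliders for pressure, temperature, path length, and VMR, and re-plots the spectrum over a wavenumber grid.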
  • Zhou, Xinyuan (2024)
    This thesis explores the integration of Augmented Reality (AR) into social media platforms, taking Snapchat's AR as a case study. Design science research is used as the research methodology. The primary question addressed is how augmented reality enhances user interaction and engagement in social media. The second research question concerns the challenges and considerations in integrating AR into social media platforms. An artifact comprising a series of AR features is developed using Lens Studio. User interaction features such as 3D object manipulation, distance-opacity mapping and camera interaction transformations are expected to bring an immersive AR experience to users. The combination of AR and cloud-based technologies for location-based AR, data management and multi-user scenarios is also discussed. A structured experiment is conducted to evaluate the effectiveness of the AR features in enhancing user engagement. The developed AR features are distributed to participants via Snapchat QR codes, after which the participants provide feedback through a detailed questionnaire. The evaluation focuses on metrics such as time spent, interaction frequency, depth of interaction, and technical performance, revealing significant insights into both user engagement and technical challenges. The findings confirm that AR significantly increases user engagement, with a majority of participants willing to spend more time and interact more frequently with Snapchat because of the AR features. However, technical challenges such as battery drain and response time were highlighted. The thesis concludes that while AR has great potential to enhance social media experiences, ongoing improvements in technical infrastructure are essential to fully realize this potential. Future research should explore the long-term impacts of AR and its scalability across different platforms.
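The "distance-opacity mapping" interaction mentioned above can be illustrated as a simple linear fade between a near and a far distance. This is a hypothetical sketch; the function name and the parameter values are illustrative assumptions, not taken from the Lens Studio project described in the thesis.

```python
# Hypothetical distance-opacity mapping: an AR object is fully opaque up
# to a near distance, fully transparent beyond a far distance, and fades
# linearly in between. Distances in cm; values are illustrative only.

def distance_to_opacity(distance: float, near: float = 50.0, far: float = 300.0) -> float:
    """Map camera-to-object distance to an opacity in [0, 1]."""
    if distance <= near:
        return 1.0
    if distance >= far:
        return 0.0
    return 1.0 - (distance - near) / (far - near)
```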
  • Särs, Pontus (2019)
    This thesis gives an introduction to fractal geometry through classical examples. The aim is to provide a broad insight into what fractals are and which new geometric concepts are needed to describe them. In the absence of a rigorous definition, fractals are characterised by their typical properties, such as exact or approximate self-similarity and detailed structure at every scale. Put more simply, no arbitrarily small part of a fractal figure will resemble a line, and every arbitrarily small part of the fractal contains parts that are exact or approximate miniatures of the whole set. The term fractal was coined by Benoit Mandelbrot in 1975 and became popular with the possibility of drawing pictures of fractals on computers. Much of the mathematics used in fractal geometry was, however, developed in the early 1900s by Felix Hausdorff, Gaston Julia and others. Two of the fundamental concepts in fractal geometry are the Hausdorff and Minkowski dimensions, which are generalisations of the concept of dimension. For fractals these are in general not integers. This introduction to fractal geometry covers their definitions and properties, as well as various methods for computing them for a number of varied examples. The thesis also treats the rigorous construction of iterated function systems (IFS), whose attractors are usually fractals. IFS provide a method for systematically constructing and studying a large class of fractals. The thesis also describes how fractals arise in very different contexts such as number theory, fractal interpolation of data, and complex dynamics. Common to all fractals is the role that infinite recursion or infinite iteration plays in their construction. The connection between fractals and chaos is also treated briefly. Finally, the connection between fractal geometry and nature is discussed, along with the potential usefulness of the field. 
The thesis is intended for readers with varying levels of mathematical background and contains both easier and more difficult concepts.
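For a self-similar set built from N contracted copies of itself with a common scaling ratio r (an IFS satisfying the open set condition), the Hausdorff dimension equals the similarity dimension d = log N / log(1/r). A minimal sketch of this standard formula, applied to three classical examples:

```python
import math

# Similarity dimension d = log N / log(1/r) for a self-similar set made
# of N copies of itself, each scaled by the ratio r. Under the open set
# condition this coincides with the Hausdorff dimension.

def similarity_dimension(n_copies: int, ratio: float) -> float:
    return math.log(n_copies) / math.log(1.0 / ratio)

cantor = similarity_dimension(2, 1/3)      # Cantor set, approx. 0.631
koch = similarity_dimension(4, 1/3)        # Koch curve, approx. 1.262
sierpinski = similarity_dimension(3, 1/2)  # Sierpinski triangle, approx. 1.585
```

All three values are non-integer, illustrating why the classical notion of dimension must be generalised for fractals.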
  • Hiekkavuo, Aino (2015)
    This thesis explores territorial stigma in the Helsinki capital region from the residents' perspective. The aim of the study is to find out whether there are stigmatised neighbourhoods in the region. Another purpose is to analyse what kind of neighbourhoods are stigmatised and what kind of people experience the stigma. In this study, people are considered to experience the stigma if they are unwilling to tell where they live. The analysis focuses on the socioeconomic structure of the neighbourhoods and the socioeconomic status and cultural orientation of the residents. The primary research data is a survey about the wellbeing of residents in the Helsinki metropolitan region conducted in 2012. The experience of territorial stigma is determined based on agreement with the statement 'I don't like telling where I live'. Statistical data about the structure of neighbourhoods is provided by Statistics Finland. The study is quantitative, and the main research methods include descriptive analysis, comparison of means and factor analysis. In addition, GIS methods are used to combine and visualise the data. The results show that territorial stigma is an existing phenomenon in the Helsinki capital region. There are mainly two types of stigmatised neighbourhoods: areas with either a very low or a very high socioeconomic profile. However, the stigma is not very strong, since even in the most stigmatised neighbourhoods only a clear minority experiences it. On the individual level, the stigma does not seem to be related to respondents' socioeconomic status but rather to their cultural orientation. The respondents who don't like telling where they live find challenges, success and personal development less important than the other respondents do. Not all low- and high-profile neighbourhoods are stigmatised, however. It seems that the stigma is a problem mainly in those neighbourhoods that have a significantly bad or an elite reputation. 
In light of international research, the stigmatisation of low-profile neighbourhoods is not a surprise. What makes the Helsinki region an interesting and special case is the stigma attached to living in elite neighbourhoods. The reason for this phenomenon might lie in the 'Finnish mentality' that stresses normality and modesty. This study focuses solely on the existence of territorial stigma. Previous international research shows that living in a stigmatised neighbourhood may have a negative impact on many aspects of life, including social relationships and employment. Therefore, it would be important to study the consequences of the stigma, and the possible ways to prevent them, in the Helsinki capital region as well.