
Browsing by master's degree program "Magisterprogrammet i datavetenskap"


  • Hiillos, Nicolas (2023)
    This master's thesis describes the development and validation of a uniform control interface for drawing robots with ROS2. The robot control software was tasked with taking SVG images as input and producing them as drawings with three different robots: the Evil Mad Scientist AxiDraw V3/A3, the UFACTORY xArm Lite6, and a virtual xArm Lite6. The intended use case for the robots and the companion control software is experiments studying human perception of the creativity of drawing robots. The control software was implemented over the course of a little over six months using a combination of C++ and Python. The design of the software utilizes ROS2 abstractions such as nodes and topics to combine the different components of the software. The control software is validated against the given requirements and found to fulfil the main objectives of the project. The most important of these are that the robots successfully draw SVG images, that they do so in a similar time frame, and that the resulting images look very similar. Drawing similarity was tested by scanning images, aligning them by minimising alignment error, and then comparing them visually after overlaying the images. Comparing aligned images was useful in detecting subtle differences in the drawing similarity of the robots and was used to discover issues with the robot control software. MSE and SSIM were also calculated for a set of these aligned images, allowing the effect of future changes to the robot control software to be evaluated quantitatively. Drawing time was evaluated by measuring the time taken to draw a set of images. This testing showed that the AxiDraw's velocity and acceleration needed to be reduced by 56% so that the xArm Lite6 could draw in a similar time.
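The pixel-wise comparison of aligned scans described above can be sketched in plain Python; `mse` and a single-window `global_ssim` are illustrative stand-ins for the thesis's actual implementation (real SSIM is usually computed over sliding windows rather than globally):

```python
def mse(a, b):
    """Mean squared error between two equally sized grayscale images,
    given as flat lists of pixel intensities in 0..255."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def global_ssim(a, b, L=255, k1=0.01, k2=0.03):
    """Single-window SSIM over whole images: the standard SSIM formula
    applied once, without a sliding window."""
    n = len(a)
    mu_a = sum(a) / n
    mu_b = sum(b) / n
    var_a = sum((x - mu_a) ** 2 for x in a) / n
    var_b = sum((y - mu_b) ** 2 for y in b) / n
    cov = sum((x - mu_a) * (y - mu_b) for x, y in zip(a, b)) / n
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
```

Identical images give an MSE of 0 and an SSIM of 1; tracking these two numbers across software changes is what allows the quantitative regression checks the abstract describes.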
  • Hertweck, Corinna (2020)
    In this work, we seek robust methods for designing affirmative action policies for university admissions. Specifically, we study university admissions under a real centralized system that uses grades and standardized test scores to match applicants to university programs. For the purposes of affirmative action, we consider policies that assign bonus points to applicants from underrepresented groups, with the goal of preventing large gaps in admission rates across groups while ensuring that the admitted students are for the most part those with the highest scores. Since such policies have to be announced before the start of the application period, there is uncertainty about which students will apply to which programs. This poses a difficult challenge for policy-makers. Hence, we introduce a strategy to design policies for the upcoming round of applications that can address either a single demographic group or multiple groups. Our strategy is based on application data from previous years and a predictive model trained on this data. By comparing this predictive strategy to simpler strategies based only on application data from, e.g., the previous year, we show that the predictive strategy is generally more conservative in its policy suggestions. As a result, policies suggested by the predictive strategy lead to more robust effects and fewer cases where the gap in admission rates is inadvertently increased through the suggested policy intervention. Our findings imply that universities can employ predictive methods to increase the reliability of the effects expected from the implementation of an affirmative action policy.
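The bonus-point mechanism described above can be illustrated with a toy simulation; the group labels, scores, and the `admission_gap` helper are invented for illustration, not taken from the thesis:

```python
def admission_gap(scores, groups, capacity, bonus=0.0, target_group="B"):
    """Admit the top-`capacity` applicants ranked by score, adding `bonus`
    points to the target group's scores, and return the admission-rate gap
    between groups "A" and "B" (positive means "A" is admitted more often)."""
    boosted = [(s + (bonus if g == target_group else 0.0), g)
               for s, g in zip(scores, groups)]
    admitted = sorted(boosted, reverse=True)[:capacity]
    rate = {}
    for grp in ("A", "B"):
        applied = sum(1 for g in groups if g == grp)
        got_in = sum(1 for _, g in admitted if g == grp)
        rate[grp] = got_in / applied if applied else 0.0
    return rate["A"] - rate["B"]
```

With a suitable bonus the gap shrinks, but because the applicant pool is unknown when the policy is announced, the same bonus can overshoot on next year's applicants, which is exactly the robustness problem the thesis studies.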
  • Koskinen, Jan (2024)
    Machine Learning Operations (MLOps) emerged as a practice for applying DevOps practices and culture to machine learning (ML) systems to increase the speed and reliability of deployments. These practices include advocating for automation and monitoring at all steps of ML system construction, including integration, testing, deployment, and infrastructure management. In addition to continuous integration (CI) and continuous delivery (CD), MLOps introduces continuous training (CT), which is unique to ML systems and is concerned with automatically training and serving ML models. Operating ML systems in production requires continuously adapting to the evolving input data. This is especially evident in time series data, which can experience frequent drifts. Moreover, implementing CT in practice is challenging and heavily dependent on the task and available data. Depending on the complexity of the model and the amount of data, the training process can be computationally costly, and using a scheduled interval for retraining is inefficient if the model still performs adequately. We designed an ML pipeline capable of efficient continuous training using an error-based trigger for retraining the model. The ML pipeline is designed for a time series forecasting task, where the data is prone to frequent drifts. We applied the design science research methodology to identify the problem, design and develop a solution artifact, and evaluate its utility and efficacy. The resulting solution utilizes an open-source MLOps platform that runs on Kubernetes and includes a custom retrainer component to enable CT. We demonstrated the efficacy of the solution using real energy demand data from a university property in Finland. Our evaluation shows that the system is capable of efficient continuous training.
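An error-based retraining trigger of the kind described above could be sketched as follows; the class name, threshold, and window size are illustrative assumptions, not the thesis's actual retrainer component:

```python
class RetrainTrigger:
    """Fires when the rolling mean absolute forecast error exceeds a
    threshold, instead of retraining on a fixed schedule."""

    def __init__(self, threshold, window=24):
        self.threshold = threshold  # acceptable mean absolute error
        self.window = window        # number of recent observations to track
        self.errors = []

    def observe(self, actual, predicted):
        """Record one forecast error; return True if retraining is due."""
        self.errors.append(abs(actual - predicted))
        self.errors = self.errors[-self.window:]
        return self.should_retrain()

    def should_retrain(self):
        if len(self.errors) < self.window:
            return False  # not enough evidence yet
        return sum(self.errors) / len(self.errors) > self.threshold
```

The design point is that training only runs when the model demonstrably degrades, which saves the compute that a fixed retraining interval would waste while the model still performs adequately.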
  • Laaja, Oskari (2022)
    Mobile applications have become common, and end-users expect to be able to use either of the major platforms: iOS or Android. The expectation of finding the application in their respective platform stores is strongly present. The process of publishing mobile applications to these application stores can be cumbersome. The frequency of mobile application updates can suffer from the heaviness of the process, reducing end-user satisfaction. As manually completed processes are prone to human error, the robustness of the process decreases and the quality of the application may diminish. This thesis presents an automated pipeline for publishing a cross-platform mobile application to the App Store and Play Store. The goal of this pipeline is to make the process faster to complete, more robust, and more accessible to people without technical know-how. The work was done with the design science methodology. As results, two artifacts are generated from this thesis: a model of a pipeline design to improve the process, and an implementation of said model to functionally prove the feasibility of the design. The design is evaluated against requirements set by the company for which the implementation was done. As a result, the publishing process used in the project in which the implementation was taken into use became faster and simpler, and became possible for non-development personnel to use.
  • Ikkala, Tapio (2020)
    This thesis presents a scalable method for identifying anomalous periods of non-activity in short periodic event sequences. The method is tested with real-world point-of-sale (POS) data from a grocery retail setting. However, the method can also be applied to other problem domains that produce similar sequential data. The proposed method models the underlying event sequence as a non-homogeneous Poisson process with a piecewise constant rate function. The rate function for the piecewise homogeneous Poisson process can be estimated with a change point detection algorithm that minimises a cost function consisting of the negative Poisson log-likelihood and a penalty term that is linear in the number of change points. The resulting model can be queried for anomalously long periods of time with no events, i.e., waiting times, by defining a threshold below which the waiting time observations are deemed anomalies. The first experimental part of the thesis focuses on model selection, i.e., finding a penalty value that results in the change point detection algorithm detecting the true changes in the intensity of the arrivals of the events while not reacting to random fluctuations in the data. In the second experimental part, the performance of the anomaly detection methodology is measured against stock-out data, which gives an approximate ground truth for the termination of a POS event sequence. The performance of the anomaly detector is found to be subpar in terms of precision and recall, i.e., the positive predictive value and the true positive rate. The number of false positives remains high even with small threshold values. This needs to be taken into account when considering applying the anomaly detection procedure in practice. Nevertheless, the methodology may have practical value in the retail setting, e.g., in guiding store personnel where to focus their resources in ensuring the availability of products.
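The waiting-time threshold described above can be made concrete: within a homogeneous segment with rate λ, waiting times are exponentially distributed, so P(W > w) = exp(−λw), and choosing a significance level α gives the threshold w* = −ln(α)/λ. A minimal sketch with illustrative function names:

```python
import math

def waiting_time_threshold(rate, alpha):
    """Smallest waiting time w such that P(W > w) <= alpha under an
    exponential waiting-time model with the given event rate."""
    return -math.log(alpha) / rate

def is_anomalous(waiting_time, rate, alpha=0.001):
    """Flag a gap between events as anomalous if it is at least as long
    as the threshold for the segment's estimated rate."""
    return waiting_time >= waiting_time_threshold(rate, alpha)
```

In the piecewise model, each segment found by the change point detector supplies its own rate, so the same waiting time can be normal in a quiet night-time segment and anomalous in a busy daytime one.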
  • Torppa, Tuomo (2021)
    User-centered design (UCD) and the agile software development process (ASDP) each address separate issues that modern software development projects face, but no direct guidelines exist on how to implement both in one project. The relevant literature offers multiple separate detailed techniques, but their applicability depends on several features of the development team, e.g., the personnel and expertise available and the size of the team. In this thesis, we propose a new agile development process model, created by evaluating the existing UCD–ASDP combination methods suggested in the current literature to find the methods most suitable for the case this study is applied to. In this new method, the development team does their daily work physically near the software's end-users for a short period of time to make the software team as easily accessible as possible. The method was then applied within an ongoing software project for a two-week period in which the team visited two separate locations where end-users had the possibility to meet the development team. This introduced "touring" method ended up offering the development team a valuable understanding of the skill and involvement level of the end-users they met without causing significant harm to the developer experience. The end-users were pleased with the visits, and the method gained support and suggestions for future applications.
  • Pandey, Bivek (2024)
    The concept of digital twins, proposed over a decade ago, has recently gathered increasing attention from both industry and academia. Digital twins are real-time or near-real-time simulations of their physical counterparts and can be implemented across various sectors. In mobile networks, digital twins are valuable for maintenance, long-term planning, and expansion by simulating the effects of new infrastructure and technology upgrades. This capability enables network operators to make informed investment and growth decisions. Challenges in implementing digital twins for mobile networks include resource limitations on mobile devices and scaling the system to a broader level. This thesis introduces a modular and flexible architecture for representing network signals from mobile devices within a digital twin environment. It also proposes a suitable platform for digital twins of mobile network signals and resource-efficient protocols for data transmission. The focus is on developing solutions that ensure scalable and resource-efficient synchronization of real-time or near-real-time data between digital twins and their physical counterparts. The architecture was evaluated through performance testing in two setups: one where data preprocessing occurs on the devices, and another where preprocessing is entirely offloaded to the digital twin platform. Additionally, scalability was assessed by analyzing the platform's ability to handle connections and data transfer from multiple devices simultaneously. The results demonstrate the system's effectiveness and scalability, providing insights into its practical application in real-world scenarios. These findings underscore the potential for widespread adoption and further development of digital twin technologies in mobile networks.
  • Häppölä, Niko (2024)
    Introduction: The EU Medical Device Regulation (MDR) sets requirements for medical device software (MDSW) development. Following international standards, such as IEC 62304 and IEC 82304-1, is considered best practice to ensure compliance with the regulation. At first glance, the MDR and the standards seem counter-intuitive to the DevOps approach. DevOps has been successful in regular software development and could improve MDSW development. In addition, standalone software is increasingly prevalent as a medical device, and as such software does not need to be embedded in a physical device, the DevOps approach should be more feasible. Methods: In this thesis, a systematic multivocal literature review was conducted. The goal is to establish the state of the art of DevOps in MDSW development, what DevOps techniques and practices are suggested by academic literature and industry experiences, and what the challenges and benefits of DevOps are in MDSW. 18 scientific articles and 10 sources of gray literature were analyzed. Results: The DevOps benefits of improved quality and a faster release cycle can be achieved up to a certain point. Regulations prevent continuous deployment, but continuous integration (CI) and continuous delivery (CD) are possible. The most promising improvements can be made by automating documentation creation and bringing the tasks of regulatory experts and developers closer together by streamlining the regulatory process. Existing DevOps tools can be extended to support compliance requirements. Third-party platforms and AI/ML solutions remain problematic due to regulations.
  • Sokkanen, Joel (2023)
    DevOps software development methodologies have steadily gained ground over the past 15 years. Properly implemented, DevOps enables software to be integrated and deployed at a rapid pace. The implementation of DevOps practices creates pressure for software testing: in the world of fast-paced integrations and deployments, software testing must perform its quality assurance function quickly and efficiently. The goal of this thesis was to identify the most relevant DevOps software testing practices and their impact on software testing. Software testing in general is a widely studied topic; this thesis looks into the recent developments of software testing in DevOps. The primary sources of this study consist of 15 academic papers, which were collected with systematic literature review source collection methodologies. The study combines both systematic literature review and rapid review methodologies. The DevOps software testing practices associated with a high level of automation, continuous testing and DevOps culture adoption stood out in the results. These were followed by practices highlighting the need for flexible and versatile test tooling and test infrastructures. DevOps adoption requires the team composition and responsibilities to be carefully planned, and the testing practices should be carefully chosen. Software testing should be primarily organized in highly automated DevOps pipelines, with manual testing used to validate the results of the automatic tests. Continuous testing, multiple testing levels and versatile test tooling should be utilized. Integration and regression testing should be run on all code changes. Application monitoring and the collection of telemetry data should be utilized to improve the tests.
  • Sarapalo, Joonas (2020)
    The page hit counter system processes, counts and stores page hit counts gathered from page hit events from a news media company's websites and mobile applications. The system serves a public application interface which can be queried over the internet for page hit count information. In this thesis I describe the process of replacing a legacy page hit counter system with a modern implementation in the Amazon Web Services ecosystem utilizing serverless technologies. The process includes the background information, the project requirements, the design and comparison of different options, the implementation details and the results. Finally, I show how the new system, implemented with Amazon Kinesis, AWS Lambda and Amazon DynamoDB, has running costs that are less than half of the old system's.
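The aggregation step at the heart of such a serverless counter can be sketched without any AWS dependencies; the `page_id` field name is an invented stand-in for the real event schema, and in the actual system each aggregated count would then be applied to the datastore as a single atomic increment per page:

```python
from collections import Counter

def aggregate_hits(records):
    """Collapse a batch of page-hit events into per-page counts, so the
    datastore needs one increment per page rather than one per event."""
    counts = Counter()
    for rec in records:
        counts[rec["page_id"]] += 1
    return dict(counts)
```

Batching like this is what keeps the per-request cost of a stream-driven Lambda low: a batch of thousands of events turns into a handful of counter updates.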
  • Riihiaho, Anni (2024)
    Digital transformation refers to the process in which organisations react to changes in their environment by using new digital technologies to transform their value-creation processes in order to remain competitive. Artificial intelligence, blockchain, cloud technology and data analytics are currently considered the most important technological drivers of digital transformation. The aim of this thesis was to examine, through the literature, how digital transformation affects software development. A systematic literature review was used as the method, as it provides a protocol that reduces the biases associated with a review. According to the results, from the perspective of software development, digital transformation means implementing the digital transformation of different industries, the arrival of new technologies, and increasing automation. Digital transformation changes software development by changing skill needs and the division of labour; data is becoming ever more important, and the cloud is becoming the dominant platform for software. Among the skills needed because of digital transformation, the most important are artificial intelligence, data analytics, information security, domain expertise, low-code platforms, cloud platforms, cyber-physical systems and the Internet of Things. Digital transformation affects software development through the implementation of the digital transformation of other industries, which requires software developers to be able to deliver the software their customers want, which in turn may require mastering new technologies. It also affects the software industry as software development becomes increasingly automated. In the future, the role of artificial intelligence in software development will grow.
  • Duong, Quoc Quan (2021)
    Discourse dynamics is one of the important fields in digital humanities research. Over time, the perspectives and concerns of society on particular topics or events might change. Based on changes in the popularity of a certain theme, different patterns form, increasing or decreasing the prominence of the theme in the news. Tracking these changes is a challenging task. In a large text collection, discourse themes are intertwined and uncategorized, which makes it hard to analyse them manually. This thesis tackles the novel task of automatic extraction of discourse trends from large text corpora. The main motivation for this work lies in the need in digital humanities to track discourse dynamics in diachronic corpora. Machine learning is a potential method to automate this task by learning patterns from the data. However, in many real use cases ground truth is not available, and annotating discourses on a corpus level is incredibly difficult and time-consuming. This study proposes a novel procedure to generate synthetic datasets for this task, a quantitative evaluation method, and a set of benchmarking models. Large-scale experiments are run using these synthetic datasets. The thesis demonstrates that a neural network model trained on such datasets can obtain meaningful results when applied to a real dataset, without any adjustments of the model.
  • Harhio, Säde (2022)
    The importance of software architecture design decisions has been known for almost 20 years. Knowledge vaporisation is a problem in many projects, especially in the current fast-paced culture, where developers often switch from one project to another. Documenting software architecture design decisions helps developers understand the software better and make informed decisions in the future. However, documenting architecture design decisions is highly undervalued. It does not create any revenue in itself, and it is often a disliked and therefore neglected part of the job. This literature review explores what methods, tools and practices are suggested in the scientific literature, as well as what practitioners recommend in the grey literature. What makes these methods good or bad is also investigated. The review covers the past five years and 36 analysed papers. The evidence gathered shows that most of the scientific literature concentrates on developing tools to aid the documentation process. Twelve out of nineteen grey literature papers concentrate on Architecture Decision Records (ADRs). ADRs are small template files which, as a collection, describe the architecture of the entire system. ADRs appear to be what practitioners have become used to over the past decade, as they were first introduced in 2011. What is seen as beneficial in a method or tool is low cost and low effort while producing concise, good-quality content. What is seen as a drawback is high cost, high effort, and producing too much or badly organised content. The suitability of a method or tool depends on the project itself and its requirements.
  • Bankowski, Victor (2021)
    WebAssembly (WASM) is a binary instruction format for a stack-based virtual machine, originally designed for the Web but also capable of being run outside of browser contexts. The WASM binary format is designed to be fast to transfer, load and execute. WASM programs are designed to be safe to execute by running them in a memory-safe sandboxed environment. Combining dynamic linking with WebAssembly could allow the creation of adaptive modular applications that are cross-platform and sandboxed but still fast to load and execute. This thesis explores implementing dynamic linking in WebAssembly. Two artifacts are presented: a dynamic linking runtime prototype which exposes a POSIX-like host function interface for modules, and an Android GUI interfacing prototype built on top of the runtime. In addition, the results of measurements performed on both artifacts are presented. Dynamic linking does improve the memory usage and the startup time of applications when only some modules are needed. However, if all modules are needed immediately, dynamically linked applications perform worse than statically linked applications. Based on the results, dynamically linking WebAssembly modules could be a viable technology for PC and Android. The poor performance of a Raspberry Pi in the measurements indicates that dynamic linking might not be viable for resource-constrained systems, especially if applications are performance-critical.
  • Sinikallio, Laura (2022)
    The digitisation and structuring of parliamentary data for research use is an emerging field of study, with several national projects currently under way in Europe, for example. This thesis is part of the Semanttinen parlamentti (Semantic Parliament) project, in which the plenary speeches of the Parliament of Finland are brought for the first time into a unified, harmonised, machine-readable dataset covering the entire history of the Parliament from 1907 to the present day. The speeches and their rich descriptive metadata have been published in two versions: in the Parla-CLARIN XML format used for representing parliamentary data, and as a linked open data knowledge graph that connects the dataset to the wider national data infrastructure. The unified speech dataset offers unprecedented possibilities to examine Finnish parliamentarism over more than a hundred years in a multifaceted and automated way. The dataset contains nearly a million individual speeches and links closely to the biographical information of parliamentary actors. This thesis describes the data models developed for representing the speeches and the collection and transformation process of the speech data, and examines the challenges and opportunities of the process and the resulting dataset. To assess the usefulness of the published dataset, the Parla-CLARIN data has already been used in digital humanities research on political culture. Based on the linked data, a semantic portal, Parlamenttisampo, has been developed for publishing and studying the data online.
  • Talonpoika, Ville (2020)
    In recent years, virtual reality devices have entered the mainstream with many gaming-oriented consumer devices. However, the locomotion methods utilized in virtual reality games have yet to gain a standardized form, and different types of games have different requirements for locomotion to optimize player experience. In this thesis, we compare some popular and some uncommon locomotion methods in different game scenarios. We consider their strengths and weaknesses in these scenarios from a game design perspective. We also make suggestions about which kinds of locomotion methods would be optimal for different game types. We conducted an experiment with ten participants, seven locomotion methods and five virtual environments to gauge how the locomotion methods compare against each other, utilizing game scenarios requiring timing and precision. Our experiment, while small in scope, produced results we could use to construct useful guidelines for selecting locomotion methods for a virtual reality game. We found that the arm swinger was a favourite for situations where precision and timing were required. Touchpad locomotion was also considered one of the best for its intuitiveness and ease of use. Teleportation is a safe choice for games not requiring a strong feeling of presence.
  • Harjunpää, Jonas (2022)
    Software engineering professionals need a wide range of competences. One of these is the capacity for lifelong learning, which is necessary in a broad and constantly changing field. With the shortage of skilled workers that has formed in the ICT sector, the role of lifelong learning has begun to be emphasised even further. The purpose of this thesis has been to increase understanding of the role of lifelong learning from the perspective of software engineering professionals. The thesis seeks to identify which forms of learning are used and for what purposes, which components of the lifelong learning competence are important, and what challenges are associated with lifelong learning. The research material was collected through semi-structured interviews with software engineering professionals. The results of these interviews were compared with the results of a literature review conducted for the thesis. Of the forms of learning, informal learning is used the most, especially for smaller learning needs. Non-formal and formal learning, in turn, are used for larger needs, but more rarely. Motivation, information seeking and meta-learning stand out as central components of the lifelong learning competence. Lack of time and motivating oneself are perceived as the most common challenges concerning lifelong learning. Shortcomings in information sources and an insufficient understanding of meta-learning are also perceived as hindering lifelong learning. The findings of the thesis support the central role of the lifelong learning competence for software engineering professionals. However, there is still room for improvement in software engineering professionals' readiness for lifelong learning, for example concerning meta-learning. The findings are, however, based on the experiences of professionals who have worked in software engineering for a relatively short time, so more research is needed, especially among professionals who have worked in the field longer.
  • Walder, Daniel (2021)
    Cloud vendors have many data centers around the world and, in each data center, offer the possibility to rent computational capacity at different prices depending on the power and time needed. Most vendors, for instance Amazon Web Services, offer flexible pricing, where prices can change hourly. According to those vendors, price changes depend highly on the current workload: the more workload, the pricier it is. Specifically, this thesis is about the offered spot services. To get the most potential out of this flexible pricing, we built a framework named ELMIT, which stands for Elastic Migration Tool. ELMIT's job is to perform price forecasting and eventually perform migrations to cheaper data centers. In the end, we monitored seven spot instances with ELMIT's help. For three instances no migration was needed, because no other data center was ever cheaper. For the other four instances, however, ELMIT performed 38 automatic migrations within around 42 days, saving around $160. In detail, three out of the four instances reduced costs by 14.35%, 4.73% and 39.6%. The fourth performed unnecessary migrations and, due to slight inaccuracies in the predictions, ended up costing around 50 cents more in total. Overall, the outcome of ELMIT's monitoring job is promising and gives reason to keep developing and improving ELMIT to increase the savings even more.
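The migrate-or-stay decision such a tool faces each interval can be reduced to a simple inequality: migrate only if the predicted price advantage over the remaining rental period exceeds the one-off cost of moving the instance. A toy sketch with invented names and numbers, not ELMIT's actual logic:

```python
def should_migrate(current_price, best_other_price, hours_remaining, migration_cost):
    """Migrate if the predicted savings over the remaining rental period
    exceed the one-off cost of moving the instance to the cheaper data center.

    Prices are per hour; migration_cost is the estimated total cost of the
    move (transfer time, re-provisioning, downtime)."""
    predicted_savings = (current_price - best_other_price) * hours_remaining
    return predicted_savings > migration_cost
```

The fourth instance's small loss in the abstract illustrates the failure mode of this rule: if the price forecast feeding `best_other_price` is slightly off, the inequality can trigger migrations whose savings never materialise.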
  • Rohamo, Paavo (2024)
    The fast development cycles of Web User Interfaces (UIs) create challenges for test automation to keep up with changes in web elements. Test automation may suffer from test breakages after developers update the UIs of the System Under Test (SUT). Test breakages are not defects or bugs in the SUT, but failures in the test automation code. Failing to correctly locate a web element in the UI is one of the key reasons test breakages occur. Prior work on self-healing element locators has traditionally been done with different algorithms and, recently, with the help of Large Language Models (LLMs). This thesis aims to discover how to enable self-healing locators for Robot Framework Web UI tests, whether some web element locator types are more easily repaired than others, and which LLMs should be used for this task. An experimental study was conducted on enabling self-healing locators for Robot Framework. A custom Robot Framework library was created with Python and tested for eight different locator strategies derived from a locator breakage taxonomy. Results show that the best performance in self-healing locators is gained by using the bigger LLMs. GPT-4 Turbo and Mistral Large showed the best accuracy, repairing 87.5% of the locators in the Robot Framework test cases. The worst performer was Mistral 7B Instruct, which was not able to correct any locators. Using LLMs for self-healing locators in Robot Framework tests is possible. The results suggest that practitioners should focus on LLM prompt design, use a candidate algorithm with locator version history, and use the biggest LLMs available if possible.
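A simple non-LLM baseline for the candidate-algorithm idea mentioned above is to match the broken locator against the locators currently present in the page by string similarity; this sketch uses the standard library's `difflib` as a stand-in for the real matching logic and is not the thesis's implementation:

```python
import difflib

def heal_locator(broken_locator, candidate_locators):
    """Return the candidate locator most similar to the broken one,
    or None if nothing is reasonably close (similarity below the cutoff)."""
    matches = difflib.get_close_matches(
        broken_locator, candidate_locators, n=1, cutoff=0.5)
    return matches[0] if matches else None
```

A baseline like this handles small renames (`submit-btn` to `submit-button`) but not structural changes such as an element moving to a different locator strategy, which is where LLM-based repair has room to do better.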
  • Ikonen, Eetu (2023)
    The maximum constraint satisfaction problem (MaxCSP) is a combinatorial optimization problem in which the set of feasible solutions is expressed using decision variables and constraints on how the variables can be assigned. It can be used to represent a wide range of other combinatorial optimization problems. The maximum satisfiability problem (MaxSAT) is a restricted variant of the maximum constraint satisfaction problem with the additional restrictions that all variables must be Boolean variables and all constraints must be logical Boolean formulas. Because of this, expressing problems using MaxSAT can be unintuitive. The known solving methods for the MaxSAT problem are more efficient than the known solving methods for MaxCSP. Therefore, it is desirable to express problems using MaxSAT. Moreover, every MaxCSP instance that only has finite-domain variables can be encoded into an equivalent MaxSAT instance. Encoding a MaxCSP instance as a MaxSAT instance allows users to combine the strengths of both approaches by expressing problems using the more intuitive MaxCSP formulation but solving them with the more efficient MaxSAT solving methods. In this thesis, we review three common MaxCSP-to-MaxSAT encodings, the sparse, log, and order encodings, which differ in how they encode an integer variable into a set of Boolean variables. We use correlation clustering as a practical example for comparing the encodings. We first represent correlation clustering problems as MaxCSPs, and then encode them into MaxSAT instances. State-of-the-art MaxSAT solvers are then used to solve the MaxSAT instances. We compare the encodings by measuring the time it takes to encode a MaxCSP instance into a MaxSAT instance and the time it takes to solve the MaxSAT instance. The scope of our experiments is too small to draw general conclusions, but in our experiments the log encoding was the best overall choice.
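The log encoding mentioned above represents an integer variable with domain {0, ..., d−1} using ⌈log₂ d⌉ Boolean variables holding its binary expansion. A minimal sketch of the variable-level mapping (function names are illustrative; a real encoding must also add clauses blocking the bit patterns that fall outside the domain):

```python
import math

def log_encode(value, domain_size):
    """Encode an integer in {0, ..., domain_size-1} as a list of Booleans
    (least significant bit first), using ceil(log2(domain_size)) variables."""
    n_bits = max(1, math.ceil(math.log2(domain_size)))
    return [bool((value >> i) & 1) for i in range(n_bits)]

def log_decode(bits):
    """Recover the integer from its Boolean bit list."""
    return sum(1 << i for i, b in enumerate(bits) if b)
```

The trade-off against the sparse and order encodings is visible here: the log encoding needs only logarithmically many Boolean variables per integer variable, at the cost of constraints over those variables becoming arithmetic over bits rather than direct value tests.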