
Browsing by master's degree program "Magisterprogrammet i datavetenskap"


  • Kukkamäki, Mikael Valter (2024)
    The rapid growth of the game development process and its increasing requirements have made it demanding in terms of time, effort and complexity. Game engines were developed to reduce these constraints by providing game developers with useful features and tools. However, this also creates a deeper problem: the dependency formed between game developers and the engine administration, the upper management that controls the engine. This dependency relies on the engine administration maintaining their engine without causing harm to its users. A major conflict arose in the autumn of 2023 between the indie development community and Unity Technologies. Motivated by this conflict, the goals of this thesis are to assess the relationship and dependency between game developers and engine administration, and to raise awareness of trust issues and their impact on the game industry. In this thesis, we approach the problem with three methods: a survey, interviews, and a case study. Development teams participated in the survey, from which three developers were selected for interviews. The game developers described multiple events that formed the basis for the case study, which focused on past events around the Unity Engine. The results show that Unity has considerable value for game developers, but trust in Unity has been significantly impacted by the recent actions of its administration. Developers seriously considered changing their game engine if the administration did not regain their lost trust. Despite this, the developers still hope that the engine recovers and that the administration takes action to regain their trust. In conclusion, we emphasize interaction between game developers and engine administration as the path to their mutual interest, in other words creating games.
  • Ture, Tsegaye (2021)
    The introductory section of the thesis discusses the European General Data Protection Regulation (GDPR), its background and historical facts. The second section covers basic concepts of personal data and GDPR enforcement. The third section gives a detailed analysis of data subject rights as well as best practices for GDPR compliance to avoid penalties. The fourth section concentrates on the technical aspects of the right to be forgotten, focusing solely on the permanent erasure or deletion of personal or corporate data in accordance with the customer's wishes. Permanent deletion or erasure of data, technically addressing the right to be forgotten, and blockchain network technology are the main focus areas of the thesis. The fifth section elaborates on blockchain and its relation to GDPR compliance in particular. The thesis then discusses security aspects and encryption, confidentiality, integrity and availability of data, as well as authentication, authorization and auditing mechanisms in relation to the GDPR. The last section is the conclusion and recommendation section, which briefly summarizes the entire discussion and suggests further improvements.
  • Steenari, Jussi (2023)
    Ship traffic is a major source of global greenhouse gas emissions, and the pressure on the maritime industry to lower its carbon footprint is constantly growing. One easy way for ships to lower their emissions would be to lower their sailing speed. Global ship traffic has long followed a practice called "sail fast, then wait": ships try to reach their destination as fast as possible and then wait at an anchorage near the harbour for a mooring place to become available. This method is easy to execute logistically, but it does not optimize sailing speeds with emissions in mind. An alternative tactic would be to estimate traffic patterns at the destination and use this information to plan the voyage so that the time at anchorage is minimized. This would allow ships to sail at lower speeds without compromising the total length of the journey. To create a model for scheduling arrivals at ports, traffic patterns need to be formed on how ships interact with port infrastructure. However, port infrastructure data is not widely available in an easy-to-use form, which makes it difficult to develop models capable of predicting traffic patterns. Ship voyage information, on the other hand, is readily available from commercial Automatic Identification System (AIS) data. In this thesis, I present a novel implementation that extracts information on port infrastructure from AIS data using the DBSCAN clustering algorithm. In addition to clustering the AIS data, the implementation uses a novel optimization method to search for optimal hyperparameters for the DBSCAN algorithm. The optimization process evaluates candidate solutions using cluster validity indices (CVIs), metrics that represent the goodness of a clustering. Different CVIs are compared to narrow down the most effective way to cluster AIS data for finding information on port infrastructure.
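    The clustering approach described above can be illustrated with a minimal sketch, assuming scikit-learn and the silhouette coefficient as the CVI; the coordinates, parameter grid and CVI choice below are illustrative assumptions, not the implementation used in the thesis.

        # Minimal sketch: DBSCAN over ship positions with a CVI-guided
        # hyperparameter search. The grid and the silhouette CVI are
        # assumptions for illustration, not the thesis implementation.
        import numpy as np
        from sklearn.cluster import DBSCAN
        from sklearn.metrics import silhouette_score

        def best_dbscan(points, eps_grid, min_samples_grid):
            """Return the labelling with the highest silhouette score."""
            best_labels, best_score = None, -1.0
            for eps in eps_grid:
                for min_samples in min_samples_grid:
                    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
                    mask = labels != -1                      # drop noise points
                    if len(set(labels[mask])) < 2:
                        continue                             # CVI needs >= 2 clusters
                    score = silhouette_score(points[mask], labels[mask])
                    if score > best_score:
                        best_labels, best_score = labels, score
            return best_labels, best_score

        # Hypothetical stop positions (lat, lon) extracted from AIS messages.
        points = np.random.rand(500, 2)
        labels, score = best_dbscan(points, [0.01, 0.05, 0.1], [5, 10])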
  • Paakkola, Kalle (2024)
    SQLite has been called the most widely deployed database system, but its use in web services has been somewhat limited compared to client/server database engines. Thanks to its continued development, SQLite today has the features needed to be a serious option for certain kinds of web services. SQLite is also the technology behind many emerging globally distributed database technologies. In a case study, an existing web application backed by a centralized SQLite database is evaluated to determine what trade-offs would have to be made when switching to a globally distributed database. This is done by benchmarking the difference in latency users experience depending on their geographical location. In addition, known challenges in distributed computing, as well as challenges specific to migrating from a centralized embedded database to a globally distributed one, are evaluated. In the results, we found that there are latency improvements to be gained with the globally distributed approach. That said, optimizing application code is likely to be the most effective latency improvement for many projects. Moreover, the increased complexity of running a distributed system compared to a centralized one was, in our estimation, a big reason why the application being studied ultimately decided not to migrate towards a globally distributed deployment. Our findings relate primarily to this one application, and other applications with different circumstances could come to a different answer. These technologies are still advancing rapidly, so the properties of globally distributed database technologies are likely to keep evolving.
  • Jylhä-Ollila, Pekka (2020)
    K-mer counting is the process of building a histogram of all substrings of length k of an input string S. The problem itself is quite simple, but counting k-mers efficiently for a very large input string is a difficult task that has been researched extensively. In recent years the performance of k-mer counting algorithms has improved significantly, and there have been efforts to use graphics processing units (GPUs) in k-mer counting. The goal of this thesis was to design, implement and benchmark a GPU-accelerated k-mer counting algorithm, SNCGPU. The results show that SNCGPU compares reasonably well to the Gerbil k-mer counting algorithm on a mid-range desktop computer, but does not utilize the resources of a high-end computing platform as efficiently. The implementation of SNCGPU is available as open-source software.
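    As a point of reference for what is being accelerated, the counting problem itself can be stated in a few lines; this naive sketch is unrelated to the SNCGPU or Gerbil implementations, which use far more memory- and cache-efficient representations.

        # Naive k-mer counting: a histogram of all length-k substrings of S.
        from collections import Counter

        def count_kmers(s: str, k: int) -> Counter:
            return Counter(s[i:i + k] for i in range(len(s) - k + 1))

        print(count_kmers("ACGTACGT", 3))  # Counter({'ACG': 2, 'CGT': 2, 'GTA': 1, 'TAC': 1})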
  • Nurmivaara, Sami (2023)
    Introduction: Climate change has emerged as a global challenge in response to the increasing consumption of natural resources. As the information technology (IT) sector has grown significantly in recent years, implementing environmentally sustainable practices that lower the environmental impact of software, such as electricity usage, has become imperative. The concept of green in software engineering seeks to address these challenges in the software engineering process. Methods: As the goal is to explore and evaluate different approaches to environmental sustainability in green in software engineering, while also examining the maturity and evidence level of research on the subject, this study adopts a systematic literature review approach. The search strings, search process and other relevant information are documented for each step of the research process. Results: Green in software engineering has been identified as a promising field of research, but the absence of agreed-upon definitions and terminology often leads to research efforts replicating previous studies without a clear rationale. The goal of increasing environmental sustainability is commonly agreed on in software engineering, but the concrete steps to achieve it are currently missing. Building a strong body of knowledge, establishing common measurements and tooling to support them, and increasing knowledge about sustainability in the field of software engineering should all be taken into account in order to reach the environmental sustainability goals of tomorrow.
  • Mehtälä, Harri Eerik Jalmari (2023)
    Background: The production, operation and use of information technology (IT) have a significant impact on the environment. As an example, the estimated footprint of global greenhouse gas emissions of the IT industry, including the production, operation and maintenance of main consumer devices, data centres and communication networks, doubled between 2007 (1–1.6%) and 2016 (2.5–3.1%). The European Union regulates the energy efficiency of data centre hardware. However, there is still a lack of regulation and guidance regarding the environmental impacts of software use, i.e. impacts from the production, operation and disposal of hardware devices required for using software. Aims: The goal of this thesis is to provide actionable knowledge which could be used by software practitioners aiming to reduce the environmental impacts of software use. Method: We conducted a systematic literature review of academic literature where we assessed evidence of the effectiveness of tools, methods and practices for reducing the environmental impacts of software use. The review covers 20 papers. Results: 60% of studied papers focus on reducing the energy consumption of software that is executed on a single local hardware device, which excludes networked software. The results contain 6 tools, 25 methods and 11 practices. Program code optimisation can potentially reduce the energy consumption of software use by 2–62%. Shifting the execution time of time-flexible data centre workloads towards times when the electric grid has plenty of renewable electricity can potentially reduce data centre CO2 emissions by 33.7%. Conclusions: The results suggest that the energy consumption of software use has received much attention in research. We suggest more research to be done on environmental impacts other than energy consumption, such as CO2 emissions, software-induced hardware obsolescence, electronic waste and freshwater consumption. Practitioners should also take into account the potential impacts of data transmission networks and remote hardware, such as data centres, in addition to local hardware.
  • Lahtela, Aurora (2022)
    The actor model is a model of distributed and concurrent computation in which small parts of the software communicate with each other asynchronously, and the functionality visible to the user is a property that emerges from the cooperation of many parts. Today's software must withstand enormous numbers of users, and to do so it must be able to increase its capacity quickly in order to scale. Smaller software components are easier to add on demand, so the actor model appears to answer this need. However, challenges can arise when using the actor model, and this study aims to find and present them. The study is carried out as a systematic literature review of research related to the actor model. Data were collected from the selected studies and used to answer the research questions. The results list and categorize the software development problems to which the actor model has been applied, as well as various challenges encountered in using the actor model and their solutions. The study identified challenges in the use of the actor model and created a new categorization for them. An analysis of the root causes showed that a large share of the actor model's challenges stem from the use of asynchronous messaging, and that the programmer must constantly be careful about their own assumptions regarding message ordering. The proposed solutions were categorized according to where the additional code they require is located.
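    A minimal sketch of the asynchronous messaging the abstract refers to is shown below; each actor owns an inbox and handles messages one at a time while senders never block. The example is only illustrative and is not taken from the reviewed studies.

        # Minimal actor sketch: sequential message handling, fire-and-forget sends.
        import asyncio

        class CounterActor:
            def __init__(self):
                self.inbox = asyncio.Queue()
                self.value = 0

            async def run(self):
                while True:
                    msg = await self.inbox.get()   # messages handled one at a time
                    if msg == "stop":
                        return
                    self.value += msg

        async def main():
            actor = CounterActor()
            task = asyncio.create_task(actor.run())
            for msg in (1, 2, 3, "stop"):          # asynchronous, non-blocking sends
                actor.inbox.put_nowait(msg)
            await task
            print(actor.value)                     # 6

        asyncio.run(main())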
  • Seppänen, Jukka-Pekka (2021)
    The achievements required in the dentistry degree programme of the University of Helsinki are tracked with various Excel spreadsheets and paper forms. These achievements are part of the student's progress towards working life, and after completing the required achievements students are granted the right to work as a dentist. The problems with the current system are the difficulty of following a student's progress through the degree and, from the student's perspective, the realization of their legal protection. The public visibility of the Excel spreadsheets among students enables misuse in which one student alters another student's records. This thesis examines architectural solutions with which this tracking could be digitalized in the future. As its outcome, the thesis recommends a database and an application architecture model for the system. Because the number of users is very small and use of the system is occasional, the system does not need to be particularly scalable. For the student's legal protection it is essential that every achievement recorded by a student is stored in the database and that the state of the database remains consistent throughout the system's life cycle. It is therefore advisable to choose a relational database such as PostgreSQL, which in addition to the relational model supports flexible structures familiar from document databases. As the architecture model, it is advisable to use either a monolithic model, in which the system is built on top of a single API, or alternatively microservices, in which the system is divided into three separate microservices.
  • Kone, Damian (2021)
    Computer systems are often distributed across a network to provide services to end-users. These systems must be available and provide services to users when required. For this reason, high-availability system technologies have captured the attention of IT organizations. Most companies consider it important to provide continuous services to end-users with minimal downtime. Implementing service availability is a complex task with multiple constraints, including security, performance, and system scalability. The first chapter of the thesis introduces high-availability systems and the objectives of the thesis. The second, third, fourth and fifth chapters describe concepts, redundancy models, clusters and containers. The sixth chapter presents an approach to measuring the availability of the components of an IT system using the Application Availability Measurement method. The seventh and eighth chapters contain a case study. The seventh chapter gives an overview of the current backup system design and the issues related to the methods and tools currently used by a Finnish software company to measure availability. In the eighth chapter, as part of the case study, a solution design is proposed based on the principle of decomposing service delivery into a set of measurement points for service level indicators. A plan is provided showing how to implement the method and provide tools to measure the availability of the backup system used by the Finnish software company.
  • Kähkönen, Harri (2023)
    The volume of data generated by high-throughput DNA sequencing has grown to a magnitude that leads to substantial computational challenges in storing and searching the data. To tackle this problem, various computational methods have been developed in recent years to index collections of data sets space-efficiently and enable efficient searches. One of the most recent indexing methods, the Spectral Burrows-Wheeler Transform (SBWT), represents all distinct k-mers of a DNA sequence using only 4 bits per k-mer plus a small additional space for rank data structures. In addition to being space-efficient, it enables k-mer membership queries in time linear in k and constant in the number of distinct k-mers in the sequence. The queries rely on rank queries over bit vectors. Experiments run on a single CPU thread have shown that hundreds of thousands of k-mer membership queries can be performed over the SBWT in one second. By parallelizing the queries on a CPU, it is possible to execute millions of queries per second. However, Graphics Processing Units (GPUs) have much more parallelization potential. The main contribution of this thesis is an implementation of the k-mer membership queries over the SBWT with GPU computing. Optimizing the queries for a GPU made it possible to perform over a billion queries per second. Furthermore, the thesis presents a new enhancement to the queries over the SBWT, called presearching, which doubles the speed of the original SBWT search query. The rank query needed for the membership queries is implemented using the space-efficient poppy rank data structure and its derivative, the cumulative-poppy data structure, which is one of the contributions of the thesis.
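    The rank primitive that the membership queries build on can be sketched as follows; this block-based layout only illustrates the idea behind poppy-style structures and is not the thesis's GPU implementation.

        # Sketch of a block-based rank structure over a bit vector:
        # rank(i) = number of 1-bits in bits[0..i). Poppy and the thesis's
        # cumulative-poppy structure realize the same idea with a
        # space-efficient multi-level layout.
        class Rank:
            def __init__(self, bits, block=64):
                self.bits, self.block = bits, block
                self.cum = [0]                     # 1-bits before each block
                for b in range(0, len(bits), block):
                    self.cum.append(self.cum[-1] + sum(bits[b:b + block]))

            def rank(self, i):
                b = i // self.block
                return self.cum[b] + sum(self.bits[b * self.block:i])

        bits = [1, 0, 1, 1, 0, 1]
        assert Rank(bits, block=2).rank(4) == 3    # ones among the first four bits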
  • Tahvanainen, Minka (2023)
    This thesis explores the Ionic Framework, a popular open-source framework for building hybrid mobile applications using web technologies. The first part of the thesis briefly compares hybrid applications, native applications, and PWAs (Progressive Web Apps). The thesis includes a concise overview of the Ionic Framework through a literature review and a guide on how to use Ionic with React from installation to distribution, providing practical tips and recommendations for developers new to the framework. This part includes a discussion on the tools and technologies required to develop with Ionic, as well as using Ionic UI components and styling them. In addition, this thesis documents an Ionic project and workshop conducted for Knowit Solutions Oy. The project involved building a proof-of-concept mobile application using Ionic, while the workshop provided hands-on training for JavaScript developers on how to use the framework effectively. This thesis aims to introduce the Ionic Framework and demonstrate its usefulness for building mobile applications for iOS and Android with a single codebase.
  • Franssila, Fanni (2023)
    Magnetic reconnection is a phenomenon occurring in plasma and related magnetic fields when magnetic field lines break and rejoin, leading to the release of energy. Magnetic reconnections take place, for example, in the Earth’s magnetosphere, where they can affect the space weather and even damage systems and technology on and around the Earth. Another site of interest is in fusion reactors, where the energy released from reconnection events can cause instability in the fusion process. So far, 2D magnetic reconnection has been widely studied and is relatively well-understood, whereas the 3D case remains more challenging to characterize. However, in real-world situations, reconnection occurs in three dimensions, which makes it essential to be able to detect and analyse 3D magnetic reconnection, as well. In this thesis, we examine what potential signs of 3D magnetic reconnection can be identified from the topological elements of a magnetic vector field. To compute the topological elements, we use the Visualization Toolkit (VTK) Python package. The topology characterizes the behaviour of the vector field, and it may reveal potential reconnection sites, where the topological elements can change as a result of magnetic field lines reconnecting. The magnetic field data used in this thesis is from a simulation of the nightside magnetosphere produced using Vlasiator. The contributions of this thesis include analysis of the topological features of 3D magnetic reconnection and topological representations of nightside reconnection conditions to use in potential future machine learning approaches. In addition, a modified version of the VTK function for computing the critical points of the topology is created with the purpose of gearing it more towards magnetic vector fields instead of vector fields in general.
  • Kivivuori, Eve (2023)
    In this thesis, we discuss the Relative Lempel-Ziv (RLZ) lossless compression algorithm, our implementation of it, and the performance of RLZ in comparison to more traditional lossless compression programs such as gzip. Like the LZ77 compression algorithm, RLZ compresses its input by parsing it into a series of phrases, which are encoded as position+length pairs describing the location of each phrase within a text. Unlike ordinary LZ77, where these pairs refer to earlier points in the same text and decompression must therefore happen sequentially, in RLZ the pairs point to an external text called the dictionary. The benefit of this approach is faster random access to the original input given its compressed form: with RLZ, we can rapidly (in linear time with respect to the compressed length of the text) begin decompression from anywhere. With non-repetitive data, such as the text of a single book, website, or one version of a program's source code, RLZ tends to perform worse than traditional compression methods, both in compression ratio and in runtime. However, with very similar or highly repetitive data, such as the entire version history of a Wikipedia article or many versions of a genome sequence assembly, RLZ can compress data better than gzip and approximately as well as xz. Dictionary selection requires care, though, as compression performance depends entirely on it.
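    The phrase encoding described above can be illustrated with a greedy parse that, at each position, takes the longest match found in the dictionary and emits a (position, length) pair, falling back to a literal character when there is no match; this is a quadratic-time sketch only, whereas practical implementations find longest matches with suffix-array or FM-index based structures.

        # Greedy Relative Lempel-Ziv parse: encode `text` as (pos, len) pairs
        # pointing into an external `dictionary`; literals cover unmatched symbols.
        def rlz_parse(text, dictionary):
            phrases, i = [], 0
            while i < len(text):
                best_pos, best_len = -1, 0
                for j in range(len(dictionary)):
                    l = 0
                    while (i + l < len(text) and j + l < len(dictionary)
                           and text[i + l] == dictionary[j + l]):
                        l += 1
                    if l > best_len:
                        best_pos, best_len = j, l
                if best_len == 0:
                    phrases.append(text[i])        # literal
                    i += 1
                else:
                    phrases.append((best_pos, best_len))
                    i += best_len
            return phrases

        print(rlz_parse("abcabd", "abcd"))         # [(0, 3), (0, 2), (3, 1)]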
  • Blomgren, Roger Arne (2022)
    The cloud computing paradigm has risen, during the last 20 years, to the task of bringing powerful computational services to the masses. Centralizing computer hardware in a few large data centers has brought large monetary savings, but at the cost of a greater geographical distance between the server and the client. As a new generation of thin clients has emerged, e.g. smartphones and IoT devices, the larger latencies induced by these greater distances can limit the applications that could benefit from the vast resources available in cloud computing. Not long after the explosive growth of cloud computing, a new paradigm, edge computing, has risen. Edge computing aims at bringing the resources generally found in cloud computing closer to the edge, where many of the end-users, clients and data producers reside. In this thesis, I present the edge computing concept as well as the technologies enabling it. Furthermore, I describe a few edge computing concepts and architectures, including multi-access edge computing (MEC), fog computing and intelligent containers (ICON). Finally, I present a new edge orchestrator, the ICON Python Orchestrator (IPO), which enables intelligent containers to migrate closer to the users. The ICON Python Orchestrator tests the feasibility of the ICON concept and provides performance measurements that can be compared to other contemporary edge computing implementations. I present the IPO architecture design, including challenges encountered during the implementation phase and solutions to specific problems, as well as the testing and validation setup. Using an artificial testing and validation network, client migration speeds were measured in three different cases: redirection, cache-hot ICON migration and cache-cold ICON migration. While there is room for improvement, the measured migration speeds are on par with other edge computing implementations.
  • Kuronen, Arttu (2023)
    Background: Continuous practices are common in today's software development, and the terms DevOps, continuous integration, continuous delivery and continuous deployment are frequently used. While each of these practices helps make agile development more agile, using them requires a lot of effort from the development team, as they are not only about automating tasks but also about how development should be done. Of the three continuous practices mentioned above, continuous delivery and continuous deployment focus on the deployability of the application. Implementing continuous delivery or deployment is a difficult task, especially for legacy software that can set limitations on how these practices can be taken into use. Aims: The aim of this study is to design and implement a continuous delivery process in a case project that has no automation of deployments. Method: Challenges of the current manual deployment process were identified, and based on them a model continuous delivery process was designed. The identified challenges were also compared to the academic literature on the topic, and proposed solutions were taken into consideration when the model was designed. Based on the design, a prototype was created that automates the deployment. The model and the prototype were then evaluated to see how they address the previously identified challenges. Results: The model provides a more robust deployment process, and the prototype automates most of the larger tasks in deployment and provides valuable information about the deployments. However, due to the limitations of the architecture, only some of the tasks could be automated. Conclusions: Taking continuous delivery or deployment into use in legacy software is a difficult task, as the existing software sets many limitations on what can realistically be done. However, the results of this study show that continuous delivery is achievable to some degree even without larger changes to the software itself.
  • Vuorenkoski, Lauri (2024)
    There are two primary types of quantum computers: quantum annealers and circuit model computers. Quantum annealers are specifically designed to tackle particular problems, as opposed to circuit model computers, which can be viewed as universal quantum computers. Substantial efforts are underway to develop quantum-based algorithms for various classical computational problems. The objective of this thesis is to implement algorithms for solving graph problems using quantum annealer computers and analyse these implementations. The aim is to contribute to the ongoing development of algorithms tailored for this type of machine. Three distinct types of graph problems were selected: all pairs shortest path, graph isomorphism, and community detection. These problems were chosen to represent varying levels of computational complexity. The algorithms were tested using the D-Wave quantum annealer Advantage system 4.1, equipped with 5760 qubits. D-Wave provides a cloud platform called Leap and a Python library, Ocean tools, through which quantum algorithms can be designed and run using local simulators or real quantum computers in the cloud. Formulating graph problems to be solved on quantum annealers was relatively straightforward, as significant literature already contains implementations of these problems. However, running these algorithms on existing quantum annealer machines proved to be challenging. Even though quantum annealers currently boast thousands of qubits, algorithms performed satisfactorily only on small graphs. The bottleneck was not the number of qubits but rather the limitations imposed by topology and noise. D-Wave also provides hybrid solvers that utilise both the Quantum Processing Unit (QPU) and CPU to solve algorithms, which proved to be much more reliable than using a pure quantum solver.
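    As an illustration of how such graph problems are posed for a quantum annealer, the sketch below formulates max cut as a QUBO and solves it with D-Wave's Ocean tools; the local ExactSolver stands in for the hardware sampler, and the small graph is a made-up example rather than one of the instances studied in the thesis.

        # Max cut as a QUBO solved with Ocean's dimod; on real hardware one
        # would sample with DWaveSampler wrapped in EmbeddingComposite.
        import dimod

        edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
        Q = {}
        for u, v in edges:                 # minimising this QUBO maximises the cut
            Q[(u, u)] = Q.get((u, u), 0) - 1
            Q[(v, v)] = Q.get((v, v), 0) - 1
            Q[(u, v)] = Q.get((u, v), 0) + 2

        bqm = dimod.BinaryQuadraticModel.from_qubo(Q)
        best = dimod.ExactSolver().sample(bqm).first
        print(best.sample, -best.energy)   # partition assignment and cut size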
  • Kopio, Ville (2023)
    Coupled empowerment maximization (CEM) is an action selection policy for artificial agents. It utilizes the empowerment values of different agents to determine the best possible action to take. Empowerment quantifies the potential an agent has to impact the world around it. For example, an agent in an open field has higher empowerment than an agent locked in a cage, since in an open field the agent has greater freedom of movement. This kind of action selection policy does not require agent behaviour to be explicitly programmed, which makes it particularly promising as a non-player character policy for procedurally generated video games. To research the feasibility of CEM agents in practice, they should be studied in a large variety of situations and games. Some studies have already been performed with a CEM agent implemented in the Python programming language. The agent ran in small game environments built on top of the Griddly game engine, but the computational performance of the agent had not been a focus. Scaling the experiments to larger environments with the old implementation is not feasible, as conducting the experiments would take an enormous amount of time. Thus, the focus of this thesis is on lowering the time complexity of the agent so that there are more avenues for further research. This is achieved with a new CEM agent implementation that (1) has a more modular architecture, making future changes easier, (2) simulates future game states with an improved forward model that keeps track of already visited game states, and (3) uses an optimized Griddly version with improved environment cloning performance. Our approach is around 200 times faster than the old implementation on environments and parametrizations that could be used in future quantitative and qualitative experiments. The old implementation also had some bugs that are now resolved in the new implementation.
  • Kallio, Jarmo (2021)
    Despite the benefits and importance of ERP systems, they suffer from many usability problems. They have user interfaces that are complex and suffer from "daunting usability problems". Their implementation success rate is also relatively low, and their usability significantly influences implementation success. As a company offering an ERP system to ferry operators was planning to renew the user interface of this system, we investigated the usability of the current system to guide the future implementation of the new user interface. We studied new and long-time users by conducting sessions in which the users talked about their experiences, performed tasks with the system and filled in a usability questionnaire (System Usability Scale). Many novice and long-time users reported problems. The questionnaire scores show that all but two participants perceived the usability of the system as below average and, in the adjective rating, "not acceptable". Two users rated the usability as "excellent". We reasoned that there could be a group of users who use the system in such a way and in such a context that they do not experience these problems. The results indicate that novices have trouble, for example, navigating and completing tasks. Some long-time users also reported navigation issues. The system seems to require that its users remember many things in order to use it well. The interviews and tasks indicate that the system is complex and hard to use and that both novices and experts face problems; this is supported by the perceived usability scores. While experts could in most cases finish all tasks, during the interviews some of them reported problems such as difficulty finding the products the customers needed, unclear error reporting, tedious configuration, and the need for a lot of manual typing. We gave recommendations on what to consider when implementing the new user interface for this ERP system: for example, navigation should be improved and users should be provided with powerful search tools. ERP usability has not been studied much. Our study supports the use of already developed heuristics in classifying usability problems. Our recommendations on how to improve the usability of the studied ERP system should give some guidelines on what could be done, although not much of this is backed by laboratory studies. More work is needed in this field to find and test solutions to the usability problems users face.
  • Kuparinen, Simo (2023)
    Web development is in great demand these days. Constantly evolving technologies make it possible to create impressive websites and reduce the amount of development work. However, it is useful to consider performance, which directly affects the user experience; performance in this context means a website's load times. Front-end web development typically involves using Cascading Style Sheets (CSS), a style sheet language and web technology used to describe the visual presentation of a website. This research consists of a literature review, which covers background knowledge about how web browsers work, performance in general, performance metrics and CSS performance optimization, and an empirical part, which includes benchmarks presented in major software industry conferences for testing the performance of particular CSS features that have the potential to improve a website's performance. The loading times obtained from the benchmarks are reviewed and compared with each other. In addition, a few techniques are presented that do not have their own benchmark but which may have an effect on performance. To summarize the results, CSS is usually not the biggest performance bottleneck on a website, since overall style calculation takes about a quarter of the total runtime calculation on average. However, utilizing particular techniques and managing to shrink style calculation costs can be valuable. Based on the benchmarks in this research, using shadow DOM and scoped styles has a positive effect on style performance. For layout, performance benefits can be achieved by utilizing CSS containment and concurrent rendering. From other practices, it can be concluded that removing unused CSS, avoiding reflow and repaint as well as complex selectors, and reconsidering the usage of web fonts can yield better performance.