Browsing by study line "Tietoverkot"
Now showing items 1-20 of 20
-
(2022)The continuously evolving cyber threat landscape has become a major concern because sophisticated attacks against systems connected to the Internet have become frequent. Particular concern falls on threats known as Advanced Persistent Threats (APTs). The thesis introduces what APTs are and covers related topics, such as the tools and methods attackers can use. Attack models are also explained, with example models proposed in the literature. The thesis further introduces the kinds of operational objectives attacks can have and, for each objective, gives one example attack that characterizes it. In addition, the thesis covers various countermeasures, from the most essential security solutions to more advanced methods. The last countermeasure introduced is attribution analysis.
-
(2022)Various Denial of Service (DoS) attacks are a common phenomenon on the Internet. They can consume server resources, congest networks, disrupt services, or even halt systems. Many machine learning approaches attempt to detect and prevent such attacks at multiple levels of abstraction. This thesis examines and reports different aspects of creating and using a dataset for machine learning purposes to detect attacks in a web server environment. We describe the problem field, the origins of and reasons behind the attacks, their typical characteristics, and various attack types. We detail ways to mitigate the attacks and review current benchmark datasets. For the dataset used in this thesis, network traffic was captured in a real-world setting and flow records were labeled. The experiments include selecting important features, comparing two supervised learning algorithms, and observing how a classifier model trained on network traffic from a specific date performs in detecting new malicious records over time in the same environment. The model was also tested with a recent benchmark dataset.
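As a rough illustration of the kind of experiment described above, the following Python sketch trains and compares two supervised classifiers on labeled flow records with scikit-learn. The feature set, the synthetic data, and the two chosen algorithms are placeholder assumptions for illustration, not the dataset or models of the thesis.

# Minimal sketch: comparing two supervised classifiers on labeled flow records.
# The features and labels are synthetic placeholders, not the thesis dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
# Hypothetical flow features: duration, packets, bytes, mean inter-arrival time
X = rng.random((1000, 4))
# Synthetic labels loosely correlated with the "packets" feature (1 = attack)
y = (X[:, 1] + 0.1 * rng.standard_normal(1000) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for model in (RandomForestClassifier(n_estimators=100, random_state=0),
              LogisticRegression(max_iter=1000)):
    model.fit(X_train, y_train)
    print(type(model).__name__)
    print(classification_report(y_test, model.predict(X_test)))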
-
(2022)Blockchain technologies and cryptocurrencies have gained massive popularity in the past few years. Smart contracts extend the utility of these distributed ledgers to distributed state machines, where anyone can store and run code and then mutually agree on the next state. This opens up a whole new world of possibilities, but also many new security challenges. In this thesis we give an up-to-date survey of smart contract security issues. We first give a brief introduction to blockchains and smart contracts and explain the most common attack types and some mitigations against them. We then summarize and analyse our findings. We find that many of the attacks could be avoided, or at least severely mitigated, if developers followed good coding practices and used proven design patterns. Another finding is that changing the underlying blockchain technology to counter the issues is usually not the best approach, as it is hard and troublesome to do and might restrict the usability of contracts too much. Lastly, we find that many new automated security tools are being developed and used, which indicates a movement towards more conventional coding, where automated tools such as scanners and analysers cover a large set of security issues.
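One of the classic attack types alluded to above is reentrancy, and the "good coding practice" that prevents it is the checks-effects-interactions pattern. The sketch below is a deliberately simplified Python toy (not Solidity or any specific contract platform): the bank classes, the callback, and the bounded re-entry loop are all illustrative assumptions, showing only why updating state before handing control to external code matters.

# Toy illustration of reentrancy (not Solidity): state must be updated
# *before* handing control to external code.

class VulnerableBank:
    def __init__(self):
        self.balances = {}

    def withdraw(self, user, amount, send):
        if self.balances.get(user, 0) >= amount:
            send(amount)                    # external call before the state update:
            self.balances[user] -= amount   # re-entrant code can drain funds first

class SafeBank(VulnerableBank):
    def withdraw(self, user, amount, send):
        if self.balances.get(user, 0) >= amount:
            self.balances[user] -= amount   # checks-effects-interactions: effect first
            send(amount)

def attack(bank, user):
    stolen = []
    def send(amount):
        stolen.append(amount)
        # Re-enter withdraw while the balance has not been decremented yet
        # (bounded to three rounds so the toy example terminates).
        if len(stolen) < 3 and bank.balances[user] >= amount:
            bank.withdraw(user, amount, send)
    bank.withdraw(user, 10, send)
    return sum(stolen)

for cls in (VulnerableBank, SafeBank):
    bank = cls()
    bank.balances["alice"] = 10
    print(cls.__name__, "paid out", attack(bank, "alice"), "from a balance of 10")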
-
(2022)Software development speed has increased significantly in recent years with methodologies like Agile and DevOps, which use automation, among other techniques, to enable continuous delivery of new features and software updates to the market. This increased speed has given rise to concerns about guaranteeing security at such a pace. To improve security in today’s fast-paced software development, DevSecOps was created as an extension of DevOps. This thesis focuses on the experiences and challenges of organizations and teams striving to implement DevSecOps. We first review the key concepts through existing literature. Then, we conduct an online survey of 37 professionals from both security and development backgrounds. The results present the participants’ overall sentiments towards DevSecOps and the challenges they struggle with. We also investigate what kinds of solutions have been tried to mitigate these issues and whether these solutions have worked.
-
(2024)Hashing is a one-way method that can be used for data integrity verification, for example in digital signature systems. The Ssdeep algorithm is a classic context-triggered piecewise hashing function that is commonly used for file similarity checks. The input is divided into separate block segments for signature generation, so that modifying some parts only changes certain bytes of the signature. This characteristic makes it one of the most popular fuzzy hash algorithms for detecting similar information. Nevertheless, cryptanalysis of Ssdeep is missing from previous research. Therefore, in this thesis, we propose collision attack methods that exploit its vulnerabilities to test the feasibility of using two different inputs to obtain the same signature. Specifically, our objective is to let an attacker add custom comments to a code file so that the output signature is identical to a signature previously acknowledged by a target system using Ssdeep. In our work, we identify the vulnerabilities in the traditional hash and the rolling hash of the Ssdeep calculation. We further name three useful elements, the reset string, the matching character, and the trigger string, to control the Ssdeep process. Specifically, for finding the matching character we use brute force and modular multiplicative inverses, based on which we propose two implementation versions, coarse-grained and fine-grained, which differ in the number of potential solution states. Additionally, we investigate the block size, a parameter that depends on the length of the file content, so that the proposed attack methods work under various realistic scenarios. We test our work with comprehensive experiments, and the results verify the effectiveness of our methods in collision attacks on the Ssdeep algorithm. Our work offers insights into breaking data integrity in other fuzzy and context-triggered piecewise hashing algorithms.
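To make the context-triggered piecewise idea concrete, the sketch below shows the general mechanism in simplified Python: a context hash over a sliding window decides where a piece ends (the "trigger"), and each piece contributes one signature character, so a localized edit only changes nearby characters. This is not the exact Ssdeep construction: the window size, fixed block size, rolling hash, and piece hash here are all simplified assumptions.

# Simplified context-triggered piecewise hashing in the spirit of Ssdeep.
# Not the real Ssdeep rolling/FNV hashes; window size, block size and
# alphabet are illustrative assumptions.
import hashlib

WINDOW = 7
BLOCK_SIZE = 64          # real Ssdeep derives this from the input length
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def context_hash(window: bytes) -> int:
    # Toy context hash over the last WINDOW bytes (recomputed per position
    # for clarity rather than updated incrementally).
    h = 0
    for b in window:
        h = (h * 31 + b) & 0xFFFFFFFF
    return h

def ctph_signature(data: bytes) -> str:
    signature, piece_start = [], 0
    for i in range(len(data)):
        window = data[max(0, i - WINDOW + 1): i + 1]
        # Trigger: the context hash hits a value that is ~1/BLOCK_SIZE likely.
        if context_hash(window) % BLOCK_SIZE == BLOCK_SIZE - 1:
            piece = data[piece_start: i + 1]
            signature.append(ALPHABET[hashlib.sha256(piece).digest()[0] % 64])
            piece_start = i + 1
    if piece_start < len(data):                       # final partial piece
        signature.append(ALPHABET[hashlib.sha256(data[piece_start:]).digest()[0] % 64])
    return "".join(signature)

text = b"some source file contents " * 40
print(ctph_signature(text))
print(ctph_signature(text + b"# appended comment"))   # only trailing chars change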
-
(2022)Quantum networking is a fast-developing emerging research field. Distributing entangled qubits between any two locations in a quantum network is one of the goals of quantum networking, and repeaters can be used to extend the length of entanglement. Although researchers focus extensively on problems inside a single quantum network, further study of communication between quantum networks is necessary, because the next likely evolution of quantum networking is communication between two or more autonomous quantum networks. In this thesis, we adapt a time-slotted model from the literature to study the routing problem between quantum networks. The quantum routing problem can be split into path selection and request scheduling. We focus on the latter, as the former has already received considerable interest in the literature. Five request scheduling policies are proposed to study the impact of preferring certain request types on the entanglement generation rate. The experiments also demonstrate that other factors should be considered in the context of the entanglement rate between quantum networks, e.g., the number and distribution of requests and the inter-network distance.
-
(2023)Software engineers frequently deal with state machines and protocols while building telecommunications systems. Finite state machines have become an essential tool for designing and implementing networks due to their ability to model complicated behaviour in a structured and efficient manner. They offer a framework for describing systems as a collection of states and transitions, enabling programmers to create software that can respond to a variety of situations and events. This thesis explores the use of finite state machines in network software, examining their various applications, advantages, and limitations with a focus on cellular technologies and mobile communications. The study covers a wide range of state machine methods, including control structures, Unified Modelling Language, Specification and Description Language, state patterns, state machine frameworks, and code generators. The objectives of the research include a comprehensive review of existing state machine techniques, analysis of their relative merits and shortcomings, modelling and implementation of selected methods to evaluate their effectiveness, and identification of the features required to meet network requirements. The thesis compares the Boost Meta State Machine against the TeleNokia Specification and Description Language in a case study, followed by a feature-based comparison of quality attributes to evaluate their performance in areas such as system design, development, and evolution and maintenance. The results show that the Boost framework is better suited as a state machine implementation technique for most network software application scenarios. Finally, the thesis identifies potential directions for further research and technical approaches to address the issues discussed, highlighting emerging trends and technologies that are likely to shape the future of this important area of network architecture.
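At the lightweight end of the techniques surveyed above sits the plain table-driven state machine, which frameworks such as Boost Meta State Machine elaborate on. The Python sketch below shows that minimal form; the states and events model a hypothetical simplified connection-setup protocol, not any of the case-study machines.

# Minimal table-driven state machine; the states and events model a
# hypothetical connection setup, not the thesis case-study protocols.
TRANSITIONS = {
    ("IDLE",       "connect_request"): "CONNECTING",
    ("CONNECTING", "connect_accept"):  "CONNECTED",
    ("CONNECTING", "timeout"):         "IDLE",
    ("CONNECTED",  "disconnect"):      "IDLE",
}

class StateMachine:
    def __init__(self, initial="IDLE"):
        self.state = initial

    def handle(self, event: str) -> str:
        next_state = TRANSITIONS.get((self.state, event))
        if next_state is None:
            raise ValueError(f"event {event!r} not allowed in state {self.state!r}")
        self.state = next_state
        return self.state

sm = StateMachine()
for event in ["connect_request", "connect_accept", "disconnect"]:
    print(event, "->", sm.handle(event))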
-
How industrial automation systems met the Internet – on SCADA communication protocols and security (2024)This thesis discusses supervisory control and data acquisition (SCADA) systems and their communication security. SCADA is a ubiquitous framework used in modern industrial control systems for monitoring and controlling operational technology (OT) equipment. The thesis first briefly covers the history and evolution of SCADA, its modern-day applications, and its most common security vulnerabilities. The rest of the thesis discusses the requirements of SCADA system communication and the related security aspects, with a more detailed look at SCADA communication protocols and communication security. Additionally, some related standards and best practices are introduced. Cyber attacks targeting SCADA systems can have far-reaching real-world consequences, as shown by several prior known incidents. For this reason, research focusing on SCADA communication security is becoming increasingly crucial.
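A concrete example of why SCADA communication security matters is that widely used protocols such as Modbus/TCP were designed without any authentication or encryption. The Python sketch below builds a raw Modbus/TCP "read holding registers" request from the public frame layout just to show that the frame carries no credentials at all; the register address and count are arbitrary example values, and this is an illustration rather than part of the thesis.

# Raw Modbus/TCP "read holding registers" request (function code 0x03).
# Register address and count are arbitrary example values; note that the
# protocol itself carries no authentication whatsoever.
import struct

def read_holding_registers(transaction_id: int, unit_id: int,
                           start_addr: int, count: int) -> bytes:
    pdu = struct.pack(">BHH", 0x03, start_addr, count)   # function, address, count
    mbap = struct.pack(">HHHB",
                       transaction_id,   # matched to the response by the client
                       0,                # protocol identifier, always 0 for Modbus
                       len(pdu) + 1,     # remaining length: unit id + PDU
                       unit_id)
    return mbap + pdu

frame = read_holding_registers(transaction_id=1, unit_id=1, start_addr=0, count=10)
print(frame.hex())   # 12 bytes, sendable over a plain TCP socket to port 502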
-
(2022)The cloud computing paradigm has risen, during the last 20 years, to the task of bringing powerful computational services to the masses. Centralizing computer hardware in a few large data centers has brought large monetary savings, but at the cost of a greater geographical distance between server and client. As a new generation of thin clients has emerged, e.g. smartphones and IoT devices, the larger latencies induced by these greater distances can limit the applications that could benefit from the vast resources available in cloud computing. Not long after the explosive growth of cloud computing, a new paradigm, edge computing, has risen. Edge computing aims at bringing the resources generally found in cloud computing closer to the edge, where many of the end-users, clients and data producers reside. In this thesis, I present the edge computing concept as well as the technologies enabling it. Furthermore, I present a few edge computing concepts and architectures, including multi-access edge computing (MEC), Fog computing and intelligent containers (ICON). Finally, I also present a new edge orchestrator, the ICON Python Orchestrator (IPO), that enables intelligent containers to migrate closer to the users. The ICON Python Orchestrator tests the feasibility of the ICON concept and provides performance measurements that can be compared to other contemporary edge computing implementations. I present the IPO architecture design, including challenges encountered during the implementation phase and solutions to specific problems, and I also describe the testing and validation setup. Using the artificial testing and validation network, client migration speeds were measured in three different cases: redirection, cache-hot ICON migration and cache-cold ICON migration. While there is room for improvement, the migration speeds measured are on par with other edge computing implementations.
-
(2022)In this thesis, we cover blockchain applications in public administration. We first cover the components related to blockchain technology, focusing in particular on issues related to the management of digital evidence, electronic voting, and health data. At the beginning we cover hash functions and the general structure of the blockchain, and then the cryptocurrency Bitcoin as an example of blockchain technology. The management of digital evidence is covered by evaluating three published studies; likewise, the applications related to voting are evaluated in light of three publications, and the management of health data by evaluating three more. For each of the three areas, we present an assessment of the applicability of blockchain technology in the form presented in the evaluated publications. Additionally, we cover a few other potential blockchain application areas. Finally, we present a general evaluation of the applicability of blockchain to public administration and the conclusions.
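The "general structure of the blockchain" mentioned above reduces to hash chaining: each block commits to the hash of its predecessor, so tampering with an earlier record is detectable. The Python sketch below shows only that structural idea with toy records; it has no consensus, proof-of-work, Merkle trees or networking, and the data values are made up.

# Minimal hash-linked chain illustrating the general blockchain structure:
# every block commits to the previous block's hash, so earlier tampering is
# detectable. No proof-of-work, Merkle trees or networking; illustration only.
import hashlib, json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: str) -> None:
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev_hash})

def chain_is_valid(chain: list) -> bool:
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
for record in ["evidence item A", "vote tally B", "health record C"]:
    append_block(chain, record)

print(chain_is_valid(chain))        # True
chain[0]["data"] = "tampered"       # modify an early block...
print(chain_is_valid(chain))        # ...and validation fails: False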
-
(2022)A growing number of mobile devices are in use nowadays, and the Internet connections of mobile devices are increasingly important for the everyday life of the global population. As a result, applications and use cases with differing requirements, including high throughput, reliability and continuous connectivity, have emerged for mobile device connections. Multipath transport, located on the transport layer of the TCP/IP model, has been proposed as a solution for providing better throughput, reliability and smooth handovers for mobile devices. Current mobile devices have multiple network interfaces, and multipath protocols can utilize them to transfer data over multiple paths inside one connection. Multipath protocols with parallel functionality have been proposed over the years, and the relevant protocol versions include multipath extensions of well-known transport layer protocols. The aim of the thesis is to provide, through a literature review, an overview of three multipath protocols, MPTCP, CMT-SCTP and MPQUIC, and of the advantages and limitations they have when used for mobile connectivity. First, the challenges of multipath transport and the requirements specific to mobile device usage are identified, and an overview of the protocols and their features is given. Then the protocols are compared in the context of the identified challenges and mobile device use. MPTCP is the only transport layer multipath protocol currently deployed and in use, while CMT-SCTP faces problems with deployability. MPQUIC shows promise of initially comparable performance and deployability with MPTCP. Transport layer multipath protocols are currently not optimal for interactive applications and perform suboptimally in heterogeneous network conditions. Conversely, they can boost throughput for data-intensive applications and can help provide smoother handovers, but at the cost of higher energy consumption.
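The core idea shared by MPTCP, CMT-SCTP and MPQUIC is a scheduler that splits one connection's data across several subflows. The Python sketch below is only a toy model of that idea: the path names, capacities and the "most remaining capacity" policy are made-up assumptions, and real protocols add sequencing, reassembly and per-path congestion control.

# Toy path scheduler illustrating the idea shared by multipath transports:
# one connection's byte stream is split across several paths. The capacities
# and policy below are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Path:
    name: str
    capacity: int                    # bytes this path may still carry this round
    sent: list = field(default_factory=list)

def schedule(paths, stream: bytes, chunk: int = 100):
    offset = 0
    while offset < len(stream):
        # Simple policy: pick the path with the most remaining capacity.
        path = max(paths, key=lambda p: p.capacity)
        if path.capacity <= 0:
            break                    # all paths exhausted for this round
        path.sent.append((offset, stream[offset:offset + chunk]))
        path.capacity -= chunk
        offset += chunk

paths = [Path("wlan0", capacity=600), Path("lte0", capacity=300)]
schedule(paths, b"x" * 900)
for p in paths:
    print(p.name, "carried", sum(len(c) for _, c in p.sent), "bytes")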
-
(2023)Today’s applications are largely deployed in the cloud and are often implemented using the microservice architecture, which divides a software system into distributed services and provides a solution to multiple issues of monolithic software systems, such as maintainability, scalability, and technology lock-in. However, industry experts find monitoring microservices a challenge due to their added complexity and distributed nature. Microservices are typically monitored by intrusive approaches, which incur an added development cost by requiring instrumentation of the source code or the addition of monitoring agents; non-intrusive approaches do not. Microservices often communicate using the HTTP protocol via a centralized API Gateway, which can provide a convenient way to monitor microservices without disrupting them. In this thesis, we study non-intrusive approaches to monitoring microservices and present our own non-intrusive approach for detecting faults and performance issues by examining anomalies in HTTP requests passing through a centralized API Gateway. Our approach requires minimal disruption and is easy to implement, as it utilizes the API Gateway for monitoring. We implemented the approach in the Amazon Web Services (AWS) cloud environment on an existing software system to find real-world issues and challenges of applying it. Alarms were created using the anomaly detection capabilities provided by the AWS CloudWatch service. Our non-intrusive approach can monitor latency, traffic, and errors, but not saturation. Multiple incidents of interest were detected during our evaluation period. We present the challenges and issues we faced, and in addition introduce guidelines and a library to further simplify the deployment of our approach.
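The detection idea is to treat gateway-side request latency as a time series and flag values that deviate strongly from a baseline. The thesis itself relies on AWS CloudWatch's built-in anomaly detection, so the rolling z-score sketch below is only a simplified stand-in for that capability, with synthetic latency values and arbitrary window/threshold parameters.

# Simplified stand-in for gateway-side latency monitoring: flag a request as
# anomalous when its latency deviates strongly from a rolling baseline.
# (The thesis used AWS CloudWatch anomaly detection; data here is synthetic.)
from collections import deque
from statistics import mean, stdev

def detect_anomalies(latencies_ms, window=30, threshold=3.0):
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(latencies_ms):
        if len(history) >= 10:                      # wait for a minimal baseline
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalies.append((i, value))
        history.append(value)
    return anomalies

latencies = [50, 52, 48, 51, 49, 53, 50, 47, 52, 50, 51, 49, 900, 50, 52]
print(detect_anomalies(latencies))   # the 900 ms spike is flagged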
-
(2023)Self-Sovereign Identity is a new concept for managing digital identities in digital services. The purpose of Self-Sovereign Identity is to place the user in the center and move towards a decentralized model of identity management. Verifiable Credentials, Verifiable Presentations, Identity Wallets and Decentralized Identifiers are part of the Self-Sovereign Identity model. They have also recently been included in the OpenID Connect specifications, to be used with the widely deployed authentication layer built on OAuth 2.0. OpenID Connect authentication can now be leveraged with Decentralized Identifiers (DIDs) and the public keys contained in DID Documents. This work assessed the feasibility of integrating Verifiable Credentials, Verifiable Presentations and Decentralized Identifiers with OpenID Connect in the context of two use cases. The first use case integrates Verifiable Credentials and Presentations into an OpenID Connect server and utilises Single Sign-On in a federated environment. The second use case bypasses the OpenID Provider and enables the Relying Party to authenticate directly with the Identity Wallet. Custom software components, the Relying Party, the Identity Wallet and the Verifiable Credential Issuer, were built to support the assessments. Two new authorization flows were designed for the two use cases. The Federated Verifiable Presentation Flow describes the protocol in which the Relying Party authenticates with the OpenID Provider, which receives the user information from the Wallet. This flow does not require any changes for a Relying Party using the same OpenID Provider to authenticate and utilise Single Sign-On. The Verifiable Presentation Flow enables the Relying Party to authenticate directly with the Wallet; however, this flow requires multiple changes to the Relying Party, and the benefits of a federated environment, such as Single Sign-On, are not available. Both flows are useful for their own specific use cases. The new flows utilise the new components of Self-Sovereign Identity and are promising steps towards self-sovereignty.
-
(2023)Using password hashes for verification is a common way to secure users’ passwords against a potential data breach. The functions used to create these hashes have evolved and changed over time, and hackers and security researchers constantly try to find effective ways to derive the original passwords from the hashes. This thesis focuses on cryptographic hash functions that take passwords as input and on the different methods an attacker may use to deduce a password from a hash. The research questions of the thesis are: 1. What kind of password hashing techniques have evolved, from the viewpoints of a defender and an attacker? 2. What kind of observations can be made when studying the implementations of the hashing algorithms and the tools that attackers use against the hashes? The thesis examines some commonly used password hash functions and the common attack strategies used against them. Hash functions developed especially for passwords, such as PBKDF2 and Scrypt, are explained. The password recovery tool Hashcat is introduced, and different ways to use the tool against password hashes are demonstrated. Tests are performed to demonstrate the differences between hash functions, as well as the effect of offensive and defensive techniques on password hashes. The test results are explained and reviewed.
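Both PBKDF2 and scrypt are available in Python's standard library, so the defender's side described above (hashing with a per-user salt and a tunable work factor, then verifying by recomputation) can be sketched roughly as below; the parameter values are illustrative, not tuning recommendations.

# Minimal sketch of salted, work-factor-based password hashing with the
# standard library; parameter values are illustrative, not tuning advice.
import hashlib, hmac, os

password = b"correct horse battery staple"
salt = os.urandom(16)                       # unique per user, stored with the hash

pbkdf2_hash = hashlib.pbkdf2_hmac("sha256", password, salt, iterations=600_000)
scrypt_hash = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)

print(pbkdf2_hash.hex())
print(scrypt_hash.hex())

# Verification recomputes the hash with the stored salt and compares in
# constant time.
candidate = hashlib.pbkdf2_hmac("sha256", b"wrong guess", salt, iterations=600_000)
print(hmac.compare_digest(candidate, pbkdf2_hash))   # False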
-
(2024)As a company's Information Technology (IT) solutions become more complex, the importance of good IT support systems increases. The versatility of IT solutions in production environments is constantly growing, as the systems are very long-lasting and incorporate ever more technology. A carefully built support capability can take charge of the environment comprehensively and support cooperation between different stakeholders. This thesis lays the groundwork for a case company to enhance the support capabilities of IT services in production environments. For a comprehensive take-over to succeed, it is important to first understand what the elements of the environment are and how they relate to each other. Tools supporting the mapping of the environment's technology are developed to understand what kind of support is needed at each point in time. In addition to the methods developed, past problem cases and their solutions are collected through interviews. Based on the problems and solutions detected, we build a framework of standard solutions, identifying what types of standard solutions would improve production support. Based on the standards, we create a responsibility matrix that identifies stakeholders for the development of the standards. The research explores extensively the structure of IT services in production environments and the benefits of standardizing different aspects. Our case company is a major player in the defense industry, which allows us to examine what kinds of special requirements and characteristics exist in this closely controlled sector.
-
(2023)Data centers provide a demanding and complex environment for networking, as there is a need to provide fairness, throughput, and responsiveness while balancing great volumes of data and different types of flows. Programmable scheduling aims to make networking more flexible by providing capabilities for testing, modifying, and running a greater number of scheduling algorithms on switches than is currently possible. This is done with a hardware design on top of which scheduling algorithms can be run as software. Over the years, multiple abstractions for the switch scheduler have been suggested, with the aim of being capable of running at line rate. This thesis is a literature review of different programmable scheduler designs, focusing on the Push-In First-Out, Push-In Extract-Out, Strict Priority Push-In First-Out, and Admission-In First-Out designs. The work provides an overview of the designs and their hardware implementations, observing their strengths and weaknesses with respect to the data center environment. The designs are compared to one another with a focus on trade-offs between metrics such as speed, expressiveness, and scalability, with a discussion of how these trade-offs mean that there is currently no design that surpasses the others in all aspects.
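The Push-In First-Out (PIFO) abstraction named above enqueues each packet with a programmable rank and always dequeues the smallest rank from the head. The Python sketch below captures that behavioural model with a priority queue; it says nothing about the line-rate hardware constraints the surveyed designs actually wrestle with, and the strict-priority rank function is just one example policy.

# Behavioural model of a Push-In First-Out (PIFO) queue: packets are pushed
# with a programmable rank and popped in rank order. This models the
# abstraction only, not a line-rate hardware implementation.
import heapq, itertools

class PIFO:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # tie-breaker keeps FIFO order per rank

    def __len__(self):
        return len(self._heap)

    def push(self, packet, rank):
        heapq.heappush(self._heap, (rank, next(self._seq), packet))

    def pop(self):
        rank, _, packet = heapq.heappop(self._heap)
        return packet, rank

# Example rank function: strict priority by flow class, FIFO within a class.
pifo = PIFO()
for packet, flow_class in [("p1", 2), ("p2", 0), ("p3", 1), ("p4", 0)]:
    pifo.push(packet, rank=flow_class)

while pifo:
    print(pifo.pop())    # p2, p4, p3, p1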
-
(2024)Tiny Machine Learning (TinyML) has gained popularity in recent years as a way to deploy machine learning models on resource-constrained devices. Despite the increasing use of TinyML, its lifecycle management, which includes phases such as data processing, model optimization, conversion, and deployment, still necessitates a high degree of human involvement. Alongside these advancements, a new wave has revolutionized the AI landscape. Specifically, Large Language Models (LLMs) have demonstrated remarkable capabilities in various domains, including natural language processing and code generation. This thesis explores the potential of leveraging LLMs to streamline TinyML lifecycle management by proposing a novel natural language-based TinyML automation schema and prototyping a framework that integrates the GPT-4o LLM with existing TinyML tools. The system demonstrated the ability to automate key stages of the TinyML lifecycle, including data processing, model quantization and conversion, and deployment sketches, based on high-level descriptions and project requirements. This was illustrated through a fruit classification case study on an Arduino Nano 33 BLE board, showcasing the code generation capabilities. Results indicate that LLM-powered automation can greatly reduce development time and lower the entry barriers for TinyML development. The system shows promise in rapid prototyping and handling diverse requirements across the TinyML lifecycle. Future work should further enhance reliability, adaptability, and fine-grained control, particularly for complex applications. This research contributes to the emerging field of AI-assisted embedded system development. While LLMs are not yet production-ready for TinyML lifecycle management, they present significant potential to accelerate innovation and broaden access to TinyML development.
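One of the lifecycle stages automated above, model quantization and conversion, is typically done with the TensorFlow Lite converter. The sketch below shows that standard step with default post-training quantization; the tiny Keras model, feature and class counts, and output file name are placeholders, not the thesis's fruit classifier or the code its LLM framework generates.

# Rough sketch of the model conversion/quantization step in a TinyML
# pipeline, using the TensorFlow Lite converter with default post-training
# quantization. The tiny model and file name are placeholders.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),               # e.g. 4 sensor features
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # e.g. 3 target classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # post-training quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
print(len(tflite_model), "bytes")   # the flatbuffer deployed to the microcontroller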
-
(2023)Energy usage and efficiency is an important topic in the area of cloud computing. It is estimated that around 10% of the world’s energy consumption goes towards the global ICT system [1]. One key aspect of the cloud is virtualization, which allows for the isolation and distribution of system resources through the use of virtual machines. In recent years, container technology, which allows for the virtualization of individual processes, has become a popular virtualization technique. However, there is limited research into the scalability of these containers from both an energy efficiency and a system performance perspective. This thesis investigates this issue through large-scale benchmarking experiments. The results indicate that it is not necessarily the total number of containers, but the task assigned to each individual container, that is relevant for energy efficiency. Key findings show a link between latency measurements performed by individual containers and the CPU cores allocated on the host machine, with additional CPU cores causing a drop in latency as the number of containers increases. Further, power consumption seems to peak when CPU utilisation is only at 50%, with additional CPU utilisation causing no increase in power consumption. Finally, RAM utilisation seems to scale linearly with the total number of containers involved.
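One common way to obtain host-side energy figures for experiments of this kind is the Linux powercap (Intel RAPL) interface; the thesis does not state which instrumentation it used, so the sketch below is only an assumed illustration of sampling a package energy counter around a workload. The sysfs path is the usual RAPL location but can differ per machine, and reading it may require elevated privileges.

# Sketch of measuring host energy around a workload via the Linux powercap
# (Intel RAPL) interface. The sysfs path is the usual location but can differ
# per machine; counter wraparound is ignored for brevity.
import time

RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"   # package 0 counter

def read_energy_uj() -> int:
    with open(RAPL_ENERGY) as f:
        return int(f.read())

def busy_work(seconds: float) -> None:
    end = time.time() + seconds
    while time.time() < end:
        pass                      # stand-in for the containerised workload

start_energy, start_time = read_energy_uj(), time.time()
busy_work(2.0)
joules = (read_energy_uj() - start_energy) / 1e6
watts = joules / (time.time() - start_time)
print(f"{joules:.2f} J over the interval, ~{watts:.1f} W average")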
-
(2024)In motorsports, especially in car races, communication among the cars, pit crews, and the audience must meet diverse requirements. The racing cars and their drivers communicate extensively with pit crews, who monitor the vehicles to ensure proper mechanical status and driver safety over high-reliability communications. At the same time, broadcasters receive footage from the track to distribute to the audience worldwide. Current 5G networks allow the creation of private networks, which can be a suitable tool for serving motorsport events; consequently, the devices on the private network can be isolated from the network traffic of the spectators. In this thesis, we quantify and analyze measurements of using private and public LTE networks in a racing scenario. The work involves simulating races in a network simulator. Specifically, mobility traces collected from an actual racing event are used to simulate mobile nodes as the racing drivers. Fixed locations are used for a pit crew, a broadcaster, and an audience, with the base station as an access point. The configuration of the base station determines the type of LTE network in the simulations; we measure each network model and evaluate its performance based on selected metrics. Finally, we discuss the findings from our simulations and the potential reasons behind them.
-
(2024)The Zero Trust security model reworks network security thinking by starting from the assumption that no zone of the network is inherently secure. Consequently, connections inside trusted networks must also be examined critically and brought under discretionary, least-privilege access control. The concept of microsegmentation must be understood in relation to network segmentation, i.e., dividing the network into zones. A microsegment is a zone so microscopic that it is only the size of a single host. With microsegmentation, every connection, even one between two hosts on the same local network, crosses a zone boundary and is subject to access control. This thesis studies the following question: if, in accordance with the Zero Trust model, only known connections are allowed from a virtual computer lab by using a microsegmenting firewall, how much does this reduce local network traffic, and is anything essential blocked at the same time? The theoretical part introduces the basics of data communications, the Defense in Depth and Zero Trust concepts, the operating principles of firewalls, computer virtualization, and ways of implementing data center networks. The empirical part analyzes local network traffic from the University of Helsinki remote desktop environment, which can be thought of as a virtual, remotely used computer lab. The lateral connections originating from the remote desktop environment are analyzed quantitatively, comparing traffic volumes in the unicast, multicast, and broadcast classes, and qualitatively, by examining the software behind these connections. At the same time, answers are sought to the questions of what purpose these connections serve and whether they are necessary in a virtual computer lab. The filtering of unnecessary connections is also considered from the perspective of energy saving.
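The quantitative part of such an analysis boils down to classifying each captured frame as unicast, multicast, or broadcast from its destination MAC address and tallying bytes per class. The Python sketch below shows only that classification step and assumes the frames have already been parsed into (destination MAC, frame length) pairs from a capture; the example frames are made up.

# Sketch of the quantitative comparison: tally captured bytes per traffic
# class (unicast / multicast / broadcast) from destination MAC addresses.
# Frames are assumed to be already parsed into (dst_mac, length) pairs.
from collections import Counter

def traffic_class(dst_mac: str) -> str:
    if dst_mac.lower() == "ff:ff:ff:ff:ff:ff":
        return "broadcast"
    first_octet = int(dst_mac.split(":")[0], 16)
    # The least-significant bit of the first octet marks a group address.
    return "multicast" if first_octet & 0x01 else "unicast"

frames = [                       # placeholder data, not the thesis capture
    ("ff:ff:ff:ff:ff:ff", 342),  # e.g. an ARP or NetBIOS broadcast
    ("01:00:5e:7f:ff:fa", 216),  # e.g. an SSDP multicast datagram
    ("3c:52:82:1a:2b:3c", 1460), # unicast to a known host
]

bytes_per_class = Counter()
for dst_mac, length in frames:
    bytes_per_class[traffic_class(dst_mac)] += length
print(dict(bytes_per_class))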