
Browsing by study line "Networking and Services"


  • Xue, Jiayue (2021)
    Semantic shift in natural language is a well-established phenomenon and has been studied for many years. Similarly, the meanings of scientific publications may also change over time: the same publication may be cited in distinct contexts. To investigate whether the meanings of citations have changed in different scenarios, also called semantic shift in citations, we followed the same ideas used to study semantic shift in language. More specifically, we combined the temporal referencing model and the Word2Vec model to explore the semantic shift of scientific citations in two respects: their usage over time and their usage across different domains. By observing how citations themselves changed over time and comparing the closest neighbors of citations, we concluded that the semantics of scientific publications did shift in terms of cosine distance.
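    A minimal sketch of the temporal-referencing idea with gensim's Word2Vec: occurrences of a cited work are tagged with a period while context words stay shared, and shift is measured as the cosine distance between the period-specific vectors. The citation token C123 and the toy corpus are hypothetical:

    ```python
    from gensim.models import Word2Vec
    from scipy.spatial.distance import cosine

    # Temporal referencing: only the target citation is tagged by period
    # (e.g. "C123_2005" vs. "C123_2015"); context words remain untagged,
    # so both variants live in one shared embedding space.
    sentences = [
        ["we", "extend", "C123_2005", "for", "parsing"],
        ["C123_2015", "is", "applied", "to", "neural", "translation"],
        # ... a real corpus would contain many citation contexts
    ]

    model = Word2Vec(sentences, vector_size=50, window=5, min_count=1, sg=1)

    # The semantic shift of the citation is the cosine distance between
    # its period-specific vectors.
    shift = cosine(model.wv["C123_2005"], model.wv["C123_2015"])
    print(f"cosine distance between periods: {shift:.3f}")
    ```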
  • Sarapalo, Joonas (2020)
    The page hit counter system processes, counts and stores page hit counts gathered from page hit events on a news media company’s websites and mobile applications. The system serves a public application interface which can be queried over the internet for page hit count information. In this thesis I describe the process of replacing a legacy page hit counter system with a modern implementation in the Amazon Web Services ecosystem utilizing serverless technologies. The process covers the background information, the project requirements, the design and comparison of different options, the implementation details and the results. Finally, I show that the new system, implemented with Amazon Kinesis, AWS Lambda and Amazon DynamoDB, has running costs that are less than half those of the old one.
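    A minimal sketch of the counting path, assuming a DynamoDB table named page_hits with partition key page_id; the event shape is Kinesis's standard Lambda payload, but all names are illustrative, not the thesis code:

    ```python
    import base64
    import json
    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("page_hits")  # hypothetical table name

    def handler(event, context):
        """Triggered by a Kinesis stream; atomically increments per-page counters."""
        for record in event["Records"]:
            # Kinesis delivers the payload base64-encoded inside each record.
            payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
            table.update_item(
                Key={"page_id": payload["page_id"]},
                UpdateExpression="ADD hit_count :inc",  # atomic counter update
                ExpressionAttributeValues={":inc": 1},
            )
    ```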
  • Walder, Daniel (2021)
    Cloud vendors have many data centers around the world and, in each data center, offer the possibility to rent computational capacity at different prices depending on the needed power and time. Most vendors, for instance Amazon Web Services, offer flexible pricing where prices can change hourly. According to those vendors, price changes depend highly on the current workload: the higher the workload, the higher the price. Specifically, this thesis is about the offered spot services. To get the most out of this flexible pricing, we built a framework named ELMIT, which stands for Elastic Migration Tool. ELMIT’s job is to perform price forecasting and eventually perform migrations to cheaper data centers. In the end, we monitored seven spot instances with ELMIT’s help. For three instances no migration was needed, because no other data center was ever cheaper. For the other four instances, however, ELMIT performed 38 automatic migrations within around 42 days, saving around $160. In detail, three of the four instances reduced costs by 14.35%, 4.73% and 39.6%. The fourth performed unnecessary migrations and, due to slight inaccuracies in the predictions, ended up costing more money, in total around 50 cents more. Overall, the outcome of ELMIT’s monitoring job is promising and gives reason to keep developing and improving ELMIT to increase the savings further.
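    ELMIT's internals are not shown in the abstract; a minimal sketch of the underlying price comparison with boto3 (a real AWS API; regions and instance type are illustrative) could be:

    ```python
    import boto3
    from datetime import datetime, timedelta, timezone

    def latest_spot_price(region: str, instance_type: str = "m5.large") -> float:
        """Return the most recent Linux spot price for one instance type in a region."""
        ec2 = boto3.client("ec2", region_name=region)
        resp = ec2.describe_spot_price_history(
            InstanceTypes=[instance_type],
            ProductDescriptions=["Linux/UNIX"],
            StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
        )
        newest = max(resp["SpotPriceHistory"], key=lambda r: r["Timestamp"])
        return float(newest["SpotPrice"])

    # Migrate (conceptually) if another data center is currently cheaper.
    current, candidate = "eu-north-1", "us-east-1"
    if latest_spot_price(candidate) < latest_spot_price(current):
        print(f"{candidate} is cheaper: consider migrating")
    ```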
  • Heinonen, Ava (2020)
    The design of instructional material affects learning from it. Abstraction, that is, limiting details and presenting difficult concepts by linking them with familiar objects, can limit the burden on working memory and make learning easier. The presence of visualizations, and the level to which students can interact with and modify them, also referred to as engagement, can likewise promote information processing. This thesis presents the results of a study using a 2x3 experimental design with abstraction level (high abstraction, low abstraction) and engagement level (no viewing, viewing, presenting) as the factors. The study consisted of two experiments on different topics: hash tables and multidimensional arrays. We analyzed the effect of these factors on instructional efficiency and learning gain, accounting for prior knowledge and prior cognitive load. We observed that the high-abstraction conditions limited cognitive load during studying for all participants, but were particularly beneficial for participants with some prior knowledge of the topic they studied. We also observed that higher engagement levels benefit participants with no prior knowledge of the topic they studied, but not necessarily participants with some prior knowledge. Low cognitive load in the pre-test phase makes studying easier regardless of the instructional material, as does knowledge of the topic being studied. Our results indicate that abstractions and engagement with learning materials need to be designed with the students and their knowledge levels in mind. However, further research is needed to identify the components of different abstraction levels that affect learning outcomes, and why and how cognitive load in the pre-test phase affects cognitive load throughout studying and testing.
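    Instructional efficiency in studies of this kind is commonly computed from standardised performance and effort scores as E = (z_P - z_E) / sqrt(2) (Paas and van Merriënboer); a sketch under the assumption that this is the measure used here, with toy data:

    ```python
    import statistics
    from math import sqrt

    def instructional_efficiency(performance: list[float], effort: list[float]) -> list[float]:
        """E = (z_performance - z_effort) / sqrt(2), computed per participant."""
        zp = [(p - statistics.mean(performance)) / statistics.stdev(performance)
              for p in performance]
        ze = [(e - statistics.mean(effort)) / statistics.stdev(effort)
              for e in effort]
        return [(p - e) / sqrt(2) for p, e in zip(zp, ze)]

    # Toy data: post-test scores and self-reported cognitive load ratings.
    print(instructional_efficiency([70, 85, 90], [6, 4, 3]))
    ```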
  • Ture, Tsegaye (2021)
    The introductory section of the thesis discusses the European General Data Protection Regulation (GDPR), its background and historical facts. The second section covers basic concepts of personal data and GDPR enforcement. The third section gives a detailed analysis of data subject rights as well as best practices for GDPR compliance to avoid penalties. The fourth section concentrates on the technical aspects of the right to be forgotten, focusing solely on the permanent erasure or deletion of personal or corporate data in compliance with the customer’s wishes. Permanent deletion or erasure of data, technically addressing the right to be forgotten, and blockchain network technology are the main focus areas of the thesis. The fifth section elaborates on blockchain and its relation to GDPR compliance in particular. The thesis then explains security aspects and encryption, the confidentiality, integrity and availability of data, as well as authentication, authorization and auditing mechanisms in relation to the GDPR. The last section is the conclusion and recommendation section, which briefly summarizes the entire discussion and suggests further improvements.
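    The abstract does not name a specific erasure mechanism; one widely discussed technique for permanent erasure on append-only storage such as a blockchain is crypto-shredding, sketched here with the cryptography package (this is an illustration of the general idea, not necessarily the thesis's approach):

    ```python
    from cryptography.fernet import Fernet

    # Each data subject gets their own key; records are stored only encrypted.
    key = Fernet.generate_key()
    record = Fernet(key).encrypt(b"name=Jane Doe; account=...")

    # "Erasure": destroy the key. The ciphertext, even if it remains on an
    # immutable ledger, is no longer recoverable in practice.
    del key  # in a real system: delete the key from the key-management service
    ```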
  • Kone, Damian (2021)
    Computer systems are often distributed across a network to provide services to end-users. These systems must be available and provide services to users when required. For this reason, high-availability system technologies have captured the attention of IT organizations. Most companies consider it important to provide continuous services with minimal downtime to end-users. The implementation of service availability is a complex task with multiple constraints, including security, performance, and system scalability. The first chapter of the thesis introduces high-availability systems and the objectives of the thesis. The second, third, fourth and fifth chapters describe concepts, redundancy models, clusters and containers. The sixth chapter presents an approach to measuring the availability of the components of an IT system using the Application Availability Measurement method. The seventh and eighth chapters contain a case study. The seventh chapter gives an overview of the current backup system design and the issues related to the methods and tools used by a Finnish software company to measure availability. In the eighth chapter, as part of the case study, a solution design is proposed based on the principle of decomposing service delivery into a set of measurement points for service level indicators. A plan shows how to implement the method and provide tools to measure the availability of the backup system used by the company.
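    A minimal sketch of the measurement-point idea: each component's service level indicator is estimated as the fraction of successful probes in a window, and serially dependent components multiply. Component names and probe counts are illustrative:

    ```python
    def availability(probe_results: list[bool]) -> float:
        """Fraction of successful health-check probes in a measurement window."""
        return sum(probe_results) / len(probe_results)

    # One probe per minute for two serially dependent components of a
    # hypothetical backup system: a scheduler and a storage backend.
    scheduler = [True] * 998 + [False] * 2
    storage = [True] * 995 + [False] * 5

    # Components in series: end-to-end availability is (at best) the product.
    end_to_end = availability(scheduler) * availability(storage)
    print(f"end-to-end availability ≈ {end_to_end:.4f}")
    ```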
  • Lumme, Iina (2021)
    Indoor localization in smart factories encounters difficult conditions due to metallic environments. Nevertheless, it is one of the enablers of the ongoing industrial revolution, Industry 4.0. This study investigates the usability of indoor localization at a real factory site by tracking a hoist assembly process. To test the hypothesis that indoor localization works in a factory environment, an Ultra-Wideband indoor positioning system was installed to cover the hoist assembly space. The system followed hoist assembly trolleys for three weeks, after which the data was analysed by calculating assembly times. The results show that indoor localization with Ultra-Wideband technology is a working solution for industrial environments similar to the tested one. The time calculations are more accurate than the known standard times and reveal that hoist assemblies are not standard and that there is wasted time. The results suggest that indoor localization is adaptable to industrial environments and manufacturing processes. Analysing the processes through the position data provides new knowledge that can be used to improve productivity.
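    A minimal sketch of deriving assembly times from the position data, assuming timestamped (x, y) fixes for a trolley and a rectangular assembly zone; all coordinates and times are illustrative:

    ```python
    from datetime import datetime

    # (timestamp, x, y) fixes from the UWB system; coordinates in metres.
    trace = [
        (datetime(2021, 3, 1, 8, 0), 1.0, 2.0),
        (datetime(2021, 3, 1, 8, 5), 4.5, 3.2),   # enters the assembly zone
        (datetime(2021, 3, 1, 9, 40), 5.1, 3.0),  # still inside
        (datetime(2021, 3, 1, 9, 45), 9.0, 8.0),
    ]

    ZONE = (3.0, 7.0, 2.5, 4.0)  # x_min, x_max, y_min, y_max of the assembly area

    def in_zone(x: float, y: float) -> bool:
        x0, x1, y0, y1 = ZONE
        return x0 <= x <= x1 and y0 <= y <= y1

    inside = [t for t, x, y in trace if in_zone(x, y)]
    print("assembly time:", max(inside) - min(inside))  # naive: one visit per trace
    ```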
  • Rinta-Homi, Mikko (2020)
    Heating, ventilation, and air conditioning (HVAC) systems consume massive amounts of energy. Fortunately, by carefully controlling these systems, significant energy savings can be achieved. This requires detecting the presence or number of people inside the building. Countless different sensors can be used for this purpose, the most common being air quality sensors, passive infrared sensors, wireless devices, and cameras. This thesis presents a comprehensive review and comparison of these sensors. It further investigates low-resolution infrared cameras for counting people, specifically how different infrared camera features influence counting accuracy: resolution, frame rate and viewing angle. Two systems were designed: a versatile counting algorithm, and a testing system which modifies these camera features and tests the performance of the counting algorithm. The results show that infrared cameras with a resolution as low as 4x2 are as accurate as higher-resolution cameras, and that a frame rate above 5 frames per second does not bring any significant advantage in accuracy. A resolution of 2x2 is also sufficient for counting but requires higher frame rates. Viewing angles need to be carefully adjusted for best accuracy. In conclusion, this study shows that even the most primitive infrared cameras can be used for accurate counting. This puts infrared cameras in a new light, since primitive cameras can be cheaper to manufacture. Infrared cameras for occupancy counting therefore become significantly more feasible and have potential for widespread adoption.
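    A minimal sketch of counting on such low-resolution frames: threshold pixels above ambient temperature and count connected warm blobs; scipy's labelling stands in here for the thesis's own counting algorithm, and the frame values are invented:

    ```python
    import numpy as np
    from scipy import ndimage

    # A hypothetical 4x2 thermal frame in °C, as from a very low-resolution sensor.
    frame = np.array([
        [21.0, 28.5, 21.5, 29.1],
        [20.8, 28.9, 21.2, 21.0],
    ])

    mask = frame > 25.0                  # pixels noticeably warmer than the room
    labels, count = ndimage.label(mask)  # connected warm components = candidates
    print(f"people in view: {count}")    # -> 2
    ```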
  • Lepola, Kimmo (2020)
    Latency is one of the key performance elements affecting the quality of experience (QoE) in computer games. Latency in the context of games can be defined as the time between the user input and the result on the screen. For the QoE to be satisfactory, the game needs to be able to react fast enough to player input. In networked multiplayer games, latency is composed of network delay and local delays. Major sources of network delay include queuing delay and head-of-line (HOL) blocking delay. Network delay in the Internet can even be on the order of seconds. In this thesis we discuss what feasible networking solutions exist for browser multiplayer games. We conduct a literature study to analyze the Differentiated Services architecture, some salient Active Queue Management (AQM) algorithms (RED, PIE, CoDel and FQ-CoDel), the Explicit Congestion Notification (ECN) concept, and network protocols for web browsers (WebSocket, QUIC and WebRTC). RED, PIE and CoDel as single-queue implementations would be sub-optimal for providing low latency to game traffic. FQ-CoDel is a multi-queue AQM that provides flow separation, which can prevent queue-building bulk transfers from notably hampering latency-sensitive flows. The WebRTC DataChannel seems promising for games since it can be used for sending arbitrary application data and can avoid HOL blocking. None of the network protocols, however, provides completely satisfactory support for the transport needs of multiplayer games: WebRTC is not designed for client-server connections, QUIC is not designed for the traffic patterns typical of multiplayer games, and WebSocket would require parallel connections to mitigate the effects of HOL blocking.
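    The HOL-blocking avoidance attributed to the WebRTC DataChannel comes from configuring it as unordered with zero retransmits. A minimal sketch with the aiortc Python library; the library choice and channel label are illustrative (the browser API exposes the same options):

    ```python
    from aiortc import RTCPeerConnection

    pc = RTCPeerConnection()

    # Unordered, zero-retransmit delivery: a lost game-state packet never
    # blocks newer packets behind it, unlike TCP-based WebSocket transport.
    channel = pc.createDataChannel(
        "game-state",
        ordered=False,      # no head-of-line blocking on reordering
        maxRetransmits=0,   # stale updates are dropped, not retransmitted
    )

    @channel.on("open")
    def on_open():
        channel.send(b"player position update")
    ```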
  • Toivonen, Kim (2022)
    Browser-based 3D applications have become more popular since the introduction of the Web Graphics Library (WebGL). However, they have some unique characteristics, such as the inability to access the local file system and the requirement to be executed in the browser’s scripting environment. These characteristics can introduce performance bottlenecks, and WebGL applications are also subject to the same bottlenecks as traditional 3D applications. In this thesis, we aim to provide guidelines for designing WebGL applications by conducting a background survey and creating a benchmarking platform. Our experiments showed that loading model data from the browser’s execution environment to the GPU has the biggest impact on performance. Therefore, we recommend focusing on minimizing the amount of data that needs to be added to the scene when designing 3D WebGL applications. Additionally, we found that the amount of data rendered affects the severity of performance drops when loading model data to the GPU, and suggest actively managing the scene by including only relevant data in the rendering pipeline.
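    A minimal, GL-free sketch of the recommended scene management: cull objects by relevance before their vertex data is queued for upload. Distance to the camera is used here as a hypothetical relevance criterion; all names and sizes are illustrative:

    ```python
    from dataclasses import dataclass

    @dataclass
    class SceneObject:
        name: str
        position: tuple[float, float, float]
        vertex_bytes: int

    def relevant(obj: SceneObject, camera: tuple[float, float, float], radius: float) -> bool:
        """Keep only objects within a radius of the camera (simple distance culling)."""
        dx, dy, dz = (o - c for o, c in zip(obj.position, camera))
        return dx * dx + dy * dy + dz * dz <= radius * radius

    scene = [
        SceneObject("crate", (1, 0, 2), 9_000),
        SceneObject("terrain_far", (900, 0, 900), 4_000_000),
    ]
    upload_set = [o for o in scene if relevant(o, camera=(0, 0, 0), radius=100.0)]
    print("bytes uploaded this frame:", sum(o.vertex_bytes for o in upload_set))
    ```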
  • Lee, Hyeongju (2021)
    The number of IoT and sensor devices is expected to reach 25 billion by 2030. Many IoT applications that require high availability, scalability, low latency, and security, such as connected vehicles and smart factories, have appeared. There have been many attempts to use cloud computing for IoT applications, but the mentioned requirements cannot be ensured in cloud environments. To solve this problem, edge computing has emerged. In edge environments, containerization technology is useful for deploying apps with limited resources. In this thesis, two types of highly available Kubernetes architecture (2 nodes with an external DB and 3 nodes with an embedded DB) were surveyed and implemented using the K3s distribution, which is suitable for edges. Through a series of experiments with the implemented K3s clusters, this thesis shows that the K3s clusters can provide high availability and scalability. We discuss the limitations of the implementations and provide possible solutions as well. In addition, we report the resource usage of each cluster in terms of CPU, RAM, and disk. Both clusters need less than 10% CPU and about 500 MB of RAM on average. However, the 3-node cluster with embedded DB uses more resources than the 2-node cluster with external DB when the status of the clusters changes. Finally, we show that the implemented K3s clusters are suitable for many IoT applications such as connected vehicles and smart factories. If an application that needs high availability and scalability has to be deployed in edge environments, the K3s clusters can provide good solutions for achieving the application’s goals. The 2-node cluster with external DB is suitable for applications where the amount of data fluctuates often, or where there is a stable connection to the external DB. The 3-node cluster, on the other hand, is suitable for applications that need high availability of the database even with a poor internet connection. ACM Computing Classification System (CCS): Computer systems organization → Embedded and cyber-physical systems; Human-centered computing → Ubiquitous and mobile computing
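    A minimal sketch of how cluster health might be checked during such availability experiments, using the official kubernetes Python client; the kubeconfig path is K3s's documented default, but the check itself is illustrative, not the thesis's test harness:

    ```python
    from kubernetes import client, config

    # K3s writes its kubeconfig here by default on a server node.
    config.load_kube_config(config_file="/etc/rancher/k3s/k3s.yaml")

    # List every node and report its Ready condition; during failover tests
    # the remaining nodes should stay Ready while one server is taken down.
    v1 = client.CoreV1Api()
    for node in v1.list_node().items:
        ready = next(c.status for c in node.status.conditions if c.type == "Ready")
        print(f"{node.metadata.name}: Ready={ready}")
    ```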
  • Zhang, Yu (2022)
    The Internet of Things (IoT) aims at linking billions of devices using the internet and other heterogeneous networks to share information. However, the issues of security in IoT environments are more challenging than on the ordinary Internet. A vast number of devices are exposed to attackers, and some of those devices contain sensitive personal and confidential data. Sensitive data flows, such as those of autonomous vehicles, patient life-support devices, and traffic data in smart cities, are of particular concern to researchers in the security field. The IoT architecture needs to handle security and privacy requirements such as the provision of authentication, access control, privacy and confidentiality. This thesis presents the architecture of IoT and its security issues. Additionally, we introduce the concept of blockchain technology, and the role of blockchain in different security aspects of IoT is discussed through a literature review. In a case study of Mirai, we explain how a Snort- and iptables-based approach can be used to prevent an IoT botnet from finding IoT devices by port scanning.
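    As an illustration of the detection logic (the thesis itself uses Snort rules and iptables), a scanner such as Mirai's loader can be flagged by counting distinct destination ports probed per source in a time window; a hedged Python sketch with an invented threshold:

    ```python
    from collections import defaultdict

    SCAN_THRESHOLD = 20  # distinct destination ports per source within a window

    def detect_scanners(syn_packets: list[tuple[str, int]]) -> set[str]:
        """syn_packets: (source_ip, destination_port) pairs from one time window."""
        ports_per_source: dict[str, set[int]] = defaultdict(set)
        for src, dport in syn_packets:
            ports_per_source[src].add(dport)
        return {src for src, ports in ports_per_source.items()
                if len(ports) >= SCAN_THRESHOLD}

    # A flagged source would then be dropped with an iptables rule such as
    # `iptables -A INPUT -s <src> -j DROP` (the enforcement half of the approach).
    ```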
  • Li, Jing (2020)
    Exploratory search is characterised by user uncertainty with respect to the search domain and information-seeking goals. This uncertainty can negatively impact users’ abilities to assess the quality of search results, causing them to scroll through more documents than necessary and to struggle to give consistent relevance feedback. As users’ information needs are assumed to be highly dynamic and expected to evolve over time, successful searches can be indistinguishable from those that have drifted erroneously away from their original search intent. Indeed, given their lack of domain knowledge, searchers may be slow, or even unable, to recognise when search results have become skewed towards another topic. With these issues in mind, we designed and implemented an interactive search system which integrates a keyword-summaries algorithm, Exploratory Search Captions (ESC), to support users in exploratory search. This thesis investigates the usefulness of ESC in terms of user experience and user behaviour, and also explores the impact of design decisions on user satisfaction. We evaluated the ESC system with a user study in the context of exploratory search of scientific literature in Computer Science. According to the results, participants almost unanimously preferred the retrieval system that incorporated ESC, and the presence of captions dramatically impacts user behaviour: users issue more queries, investigate fewer documents per query, but see more documents overall. We demonstrate the usefulness of ESC, the improved usability of the ESC system, and the positive impact of our design decisions.
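    The abstract does not spell out the captioning algorithm; a minimal keyword-summary sketch using TF-IDF over the result set (scikit-learn assumed, not necessarily the ESC method) could look like:

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer

    def caption(doc: str, corpus: list[str], k: int = 3) -> str:
        """Return the k highest-TF-IDF terms of one result as its caption."""
        vec = TfidfVectorizer(stop_words="english")
        tfidf = vec.fit_transform(corpus + [doc])
        row = tfidf.toarray()[-1]                 # weights for the target document
        terms = vec.get_feature_names_out()
        top = row.argsort()[::-1][:k]             # indices of the k heaviest terms
        return ", ".join(terms[i] for i in top)

    results = ["neural networks for parsing text", "transport protocols for games"]
    print(caption("graph neural networks for molecules", results))
    ```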
  • Nepali, Santosh (2020)
    Business applications such as weather forecasting and traffic management are adopting the Internet of Things (IoT) at an enormous rate. While these applications scale fast, device and sensor capabilities, particularly battery life and energy efficiency, are limited. Despite intensive research conducted to address these shortcomings, Wireless IoT Sensor Networks (WIoTSN) still cannot assure a 100% efficient network lifetime. The core objective of the thesis is therefore to provide an overview of the energy efficiency of proactive (OLSR) and reactive (DSR and AODV) data routing protocols when scaling the network size, i.e. the number of sensor nodes, the data packet size, the data transmission rate, and the speed of the mobile sink node. It also reviews the importance of security in WIoTSN. Two approaches, a literature review and simulation testing, are used to achieve the objective of the thesis. The literature review provides information about reactive and proactive protocols and their route discovery mechanisms. Similarly, the network simulator NS3 is used to run simulations evaluating the energy efficiency of the selected routing protocols. The results show the effect of scaling the selected parameters on the energy efficiency of proactive and reactive data routing protocols. The simulation results show that the reactive protocol DSR outperforms both the reactive protocol AODV and the proactive protocol OLSR in energy efficiency. From the security perspective, the thesis also emphasizes the need for security in IoT and suggests minimizing wasted resources in WIoTSN and using them, by restructuring the network, for secure energy-efficient data routing protocols.
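    As an illustration of the kind of energy accounting behind such comparisons, a simple first-order radio model can be sketched in Python; the constants are illustrative placeholders, not NS3 output or the thesis's parameters:

    ```python
    E_ELEC = 50e-9   # J/bit, electronics energy (illustrative constant)
    E_AMP = 100e-12  # J/bit/m^2, amplifier energy (illustrative constant)

    def tx_energy(bits: int, distance_m: float) -> float:
        """First-order radio model: transmit cost grows with packet size and d^2."""
        return bits * E_ELEC + bits * E_AMP * distance_m ** 2

    def rx_energy(bits: int) -> float:
        return bits * E_ELEC

    # Energy of routing one 512-byte packet over two hops of 40 m and 60 m.
    packet_bits = 512 * 8
    total = sum(tx_energy(packet_bits, d) + rx_energy(packet_bits) for d in (40, 60))
    print(f"per-packet route energy: {total * 1e6:.1f} µJ")
    ```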
  • Daubaris, Paulius (2021)
    Designing software for a variety of execution environments is a difficult task due to the multitude of device-specific features that must be taken into account. Hence, it is often difficult to determine all the available features and produce a single piece of software covering the possible scenarios. Moreover, with varying resources available, monolithic applications are often hardly suitable and need to be modularized while still providing all the necessary features of the original application. By employing units of deployment, such as components, it is possible to retrieve required functionality on demand and thus adapt to the environment. Adaptivity has been identified as one of the main enablers for leveraging offered capabilities while reducing the complexity of software development. In this thesis, we produced a proof-of-concept (PoC) implementation that leverages WebAssembly modules to assemble applications and adapt to a particular execution environment. Adaptation is driven by the information contained in metadata files. Modules are retrieved on demand from one or more repositories based on the characteristics of the environment and are integrated during execution using dynamic linking capabilities. We evaluate the work by considering the impact of modular WebAssembly applications and comparing them to standard monolithic WebAssembly applications. In particular, we investigate startup time, application execution time, and the overhead introduced by the implementation. Finally, we examine the limitations of both the technology used and the implementation, and provide ideas for future work.
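    A minimal sketch of the on-demand module idea using the wasmtime Python runtime: a module chosen from metadata for the current environment is fetched from a repository and instantiated at run time. The metadata shape, repository URL and module are hypothetical:

    ```python
    import json
    import urllib.request
    from wasmtime import Engine, Module, Store, Instance

    engine = Engine()
    store = Store(engine)

    # Metadata maps an environment characteristic to a downloadable module.
    metadata = json.loads('{"low-memory": "https://repo.example/codec_small.wasm"}')
    wasm_bytes = urllib.request.urlopen(metadata["low-memory"]).read()

    # Compile and link the module during execution; its exports become callable.
    module = Module(engine, wasm_bytes)
    instance = Instance(store, module, [])  # [] = no imports in this sketch
    # e.g. instance.exports(store)["encode"] would expose an exported function
    ```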
  • Hyötyläinen, Annamaria (2023)
    The security of web communication is crucial. When accessing an online bank, for example, one of the key issues is that users can be assured they are communicating with the bank as they intended. This assurance is achieved with the public key infrastructure for the Web, the Web PKI. Its purpose is to manage digital certificates on the Web. Certificates are used to prove one’s identity with the help of public-key cryptography. Certificate authorities (CAs) and the software vendors that operate certificate root stores have key roles in the Web PKI: the former issue certificates and the latter choose which CAs are trusted. The Web PKI has multiple challenges and is a highly researched topic. Numerous countermeasures and enhancements to the Web PKI have been developed over the years. This thesis investigates challenges in the Web PKI and proposed countermeasures, some of which are based on blockchain technology. Of the non-blockchain-based solutions, we introduce Certificate Transparency, CAA and DANE. Of the blockchain-based solutions, CertLedger, IKP and a solution for decentralising the ACME protocol are described. We find that the challenges are mainly related to certificate authorities, revocation and root stores. Of the numerous solutions, Certificate Transparency and CAA are utilised in the Web PKI. Shortening the validity period of certificates can resolve some of the challenges. Blockchain-based solutions are numerous, but none has yet seen wide deployment.
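    As a small illustration of one deployed countermeasure: a domain's CAA policy, which lists the CAs allowed to issue certificates for it, can be queried with dnspython (the domain is a placeholder and may not publish a CAA record):

    ```python
    import dns.resolver  # dnspython

    # Each "issue" entry in the CAA record set names a CA permitted to issue
    # for this domain; CAs are required to check this before issuance.
    for rr in dns.resolver.resolve("example.com", "CAA"):
        print(rr.flags, rr.tag.decode(), rr.value.decode())
    ```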
  • Juslenius, Santeri (2021)
    The Streamr Network is a decentralized publish-subscribe system. This thesis experimentally compares WebSocket and WebRTC as transport protocols in the system’s d-regular random graph type unstructured stream overlays. The thesis explores common designs for publish-subscribe and decentralized P2P systems. Underlying network protocols, including NAT traversal, are explored to understand how the WebSocket and WebRTC protocols function. The requirements set for the Streamr Network and how its design and implementations fulfill them are discussed. The design and implementations are validated with the use of simulations, emulations and real-world experiments deployed on AWS. The performance metrics measured in the real-world experiments are compared to related work. As the implementations using the two protocols are separate, incompatible versions, the differences between them were taken into account during the analysis of the experiments. Although the WebSocket version’s overlay construction is known to be inefficient and vulnerable to churn, it was found to be unintentionally topology-aware, which caused the WebSocket stream overlays to perform better in terms of latency. The WebRTC stream overlays were found to be more predictable and more optimized for small payloads, as estimates for message propagation delays had a MEPA of 1.24% compared to WebSocket’s 3.98%. Moreover, the WebRTC version enables P2P connections between hosts behind NATs. As the WebRTC version’s overlay construction is more accurate, reliable, scalable, and churn-tolerant, it can be used to create intentionally topology-aware stream overlays to overtake the results of the WebSocket implementation.
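    As a sketch of the overlay shape under comparison, a d-regular random graph over a stream's peers can be generated with networkx; the degree and peer count are illustrative:

    ```python
    import networkx as nx

    d, peers = 4, 100  # node degree and number of peers in one stream overlay
    overlay = nx.random_regular_graph(d, peers, seed=42)

    # Every peer relays to exactly d neighbours; message propagation hops are
    # bounded by the diameter, which stays small in d-regular random graphs.
    print("connected:", nx.is_connected(overlay))
    print("diameter (max propagation hops):", nx.diameter(overlay))
    ```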