
Browsing by department "Tietojenkäsittelytieteen osasto"


  • Heino, Lauri (2020)
    The suffix array is a space-efficient data structure that provides fast access to all occurrences of a search pattern in a text. Typically suffix arrays are queried with algorithms based on binary search. With a pre-computed index data structure that provides fast access to the relevant suffix array interval, querying can be sped up, because the binary search then operates over a smaller interval. In this thesis a number of different ways of implementing such an index data structure are studied, and the performance of each implementation is measured. Our experiments show that with relatively small data structures, one can reduce suffix array query times by almost 50%. There is a trade-off between the size of the data structure and the speed-up potential it offers.
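    The querying scheme described above can be sketched in Python. This is a minimal illustration, not the thesis's implementation: the naive suffix-array construction and the fixed-length `k`-prefix index are simplifying assumptions.

```python
def suffix_array(text):
    # Naive O(n^2 log n) construction; real implementations use faster algorithms.
    return sorted(range(len(text)), key=lambda i: text[i:])

def build_prefix_index(text, sa, k=2):
    # Pre-compute, for every distinct k-character prefix, the suffix-array
    # interval [lo, hi] of suffixes that start with it.
    index = {}
    for pos, i in enumerate(sa):
        key = text[i:i + k]
        if key not in index:
            index[key] = [pos, pos]
        else:
            index[key][1] = pos
    return index

def find(text, sa, pattern, index=None, k=2):
    # Binary-search the suffix array for the interval of suffixes starting
    # with `pattern`; the optional prefix index narrows the initial interval,
    # which is exactly where the speed-up comes from.
    lo, hi = 0, len(sa)
    if index is not None and len(pattern) >= k:
        entry = index.get(pattern[:k])
        if entry is None:
            return []            # no suffix starts with this prefix
        lo, hi = entry[0], entry[1] + 1
    m = len(pattern)

    def lower(target, strict):
        # First position whose suffix prefix is >= target (> if strict).
        l, r = lo, hi
        while l < r:
            mid = (l + r) // 2
            prefix = text[sa[mid]:sa[mid] + m]
            if prefix < target or (strict and prefix == target):
                l = mid + 1
            else:
                r = mid
        return l

    start, end = lower(pattern, False), lower(pattern, True)
    return sorted(sa[start:end])
```

With or without the index the result is the same; the index only shrinks the interval the binary search has to cover.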
  • Röyskö, Visa (2020)
    Botnets are networks of devices that have been infected with malware. The botnet's controller can issue commands to these machines and order them to carry out attacks, such as distributed denial-of-service attacks and sending spam. This thesis compares three different botnet software families, reviewing their topologies and distinctive features. Finally, various ways of defending against botnets are discussed.
  • Karikoski, Antti (2019)
    Data compression is one way to gain better performance from a database. Compression is typically achieved with a compression algorithm, an encoding or both. Effective compression directly lowers the physical storage requirements, translating to reduced storage costs. Additionally, in the case of a data transfer bottleneck where the CPU is starved of data, compression can yield improved query performance through increased transfer bandwidth and better CPU utilization. However, obtaining better query performance is not trivial, since many factors affect the viability of compression. Compression has been found especially successful in column-oriented databases, where similar data is stored close together on physical media. This thesis studies the effect of compression on the columnar storage format Apache Parquet through a micro-benchmark based on the TPC-H benchmark. Compression is found to have positive effects on simple queries. However, with complex queries, where data scanning is a relatively small portion of the query, no performance gains were observed. Furthermore, this thesis examines the decoding performance of the encoding layer of a case database, Fastorm. The goal is to determine its efficiency relative to other encodings and whether it could be improved upon. Fastorm's encoding is compared against various encodings of Apache Parquet in a setting where the data comes from a real-world business. Fastorm's encoding is deemed to perform well enough, coupled with strong evidence to consider adding delta encoding to its repertoire of encoding techniques.
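    The delta-encoding idea mentioned above can be illustrated with a minimal sketch (a toy version, not Fastorm's or Parquet's actual codec): in sorted or slowly-changing columns the differences between neighbouring values are small, and small integers compress far better than the raw values.

```python
def delta_encode(values):
    # Store the first value verbatim, then each successive difference.
    if not values:
        return []
    return [values[0]] + [b - a for a, b in zip(values, values[1:])]

def delta_decode(deltas):
    # Rebuild the original values with a running sum over the deltas.
    out, acc = [], 0
    for d in deltas:
        acc += d
        out.append(acc)
    return out
```

For a timestamp-like column such as `[100, 102, 103, 107, 110]` the encoded form is `[100, 2, 1, 4, 3]`: same information, but the values after the first fit in far fewer bits.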
  • Soyoye, Fiyinfoluwa (2020)
    Voice-Based Proactive Information Retrieval can support social interactions by augmenting conversations and removing the need for explicit search activity. Previous work introduces the SearchBot, a proactive search agent used to collect data about search behaviour during social interactions. Although prior analyses show that it positively influences the conversations of its users, the research leaves a gap in understanding how it affects their other behaviours. This thesis aims to bridge this gap by analyzing data from a previous study and characterizing the influence of the SearchBot on the behaviours and activities of its users. Our findings show that study participants engaged more frequently with the SearchBot system than with a more traditional search system. In addition, our exploration of the different types of search activities that users perform shows that SearchBot users are able to avoid the most cognitively expensive one (query formulation and typing). The findings also reveal patterns of interaction between the SearchBot system and its users in terms of speech patterns and search behaviours. We discuss the implications of our findings and provide suggestions for future work.
  • Pulkkinen, Markku (2020)
    Organizations are adopting cloud technologies at an increasing rate. A significant share of the growth of cloud deployments comes from application migrations to cloud computing. Migrating existing legacy applications to a cloud computing platform is not a trivial task. A migration methodology helps migrate applications to the cloud more effectively and with lower risk than doing it by trial and error. Part of the cloud migration process is the selection and execution of a migration strategy from among the possible, situational and commonly used options. The migration strategy defines many of the migration process activities, since they depend on the cloud architecture and the service and deployment models, which are implicitly set by the migration strategy. Many of the existing cloud migration methods do not specify the factors that lead to migration strategy selection. The migration strategy selection is a critical part of migration planning, involving multiple organisations and several individuals. This thesis presents categories of migration strategy factors derived from a review of cloud migration methodologies and process frameworks, and validates the factors through a deductive thematic analysis of qualitative case study interview data. Clarity about the migration strategy factors, and a way to address them, will increase the migration success rate and reduce planning time.
  • Malmivirta, Titti (2020)
    Continuous thermal imaging is a way to measure psycho-physiological signals in humans. Psycho-physiological signals refer to physical signals caused by some psychological or mental situation or change. One of the technical challenges with thermal psycho-physiological signal measurement is that the devices used to measure the temperature changes need to be sensitive and accurate enough to actually detect them. This is generally true for laboratory equipment, but the current relatively cheap and small mobile devices, including uncooled thermal cameras, could make these kinds of measurements cheaper, less intrusive and mobile. Currently, consumer-priced mobile devices still tend to introduce a lot of noise and other inaccuracies into measurements. The focus of this thesis is to evaluate the usefulness of the FLIR One thermal camera integrated in the Caterpillar Cat S60 phone for cognitive load measurement, and the possibility of improving the measurement accuracy with additional calibration correction. We developed a deep learning based calibration correction method as an attempt to improve the quite noisy initial measurements of the thermal camera. Then an experiment measuring cognitive load was organised. The calibration correction method was used to reduce errors in the data from the cognitive load experiment, to see if the performance of the thermal camera can be improved enough for accurate cognitive load detection. Our results show that while our calibration correction method does improve the measurement accuracy when compared to the ground truth, the fluctuations in the measurements do not decrease enough to improve the performance of the thermal camera with regard to cognitive load sensing.
  • Hannikainen, Jaakko (2020)
    The choice of programming language is an important part of implementing software projects. Although programming languages evolve at a rapid pace, it is still common today to choose the C programming language, standardized over 30 years ago, for implementing software. This thesis investigates why C is still in wide use today instead of newer programming languages. Ada, C++, D, Go and Rust are chosen as the languages to compare against C. All five languages are efficient, and each of them has at some point in its history aimed to replace C. The languages are compared against C in terms of performance, memory usage and C compatibility. In addition, the thesis identifies C's most important features as well as features of C that could be improved. The results are used to design a new programming language called Purkka. In the performance measurements, the other programming languages are found to be slower than C. Moreover, features of the other languages, such as automatic memory management, are found to cause problems for C compatibility. Simplicity, efficiency and platform independence emerge as C's most important features. These properties are taken into account in the design of the Purkka language, which additionally emphasizes compatibility with the C programming language. Purkka, developed for this thesis, is designed as a C-like programming language whose syntax is simpler and more consistent than C's. The performance measurements show that Purkka's changes to C do not introduce any run-time overhead. Because the Purkka language is compiled to C, it is as compatible as possible with existing compilers.
  • Lehtonen, Samuli Johannes (2020)
    Online gaming is more popular than ever and many video game companies are reliant on the cash flow generated by online games. If a video game company wants its game to be successful, the game has to be resilient against cheating, the presence of which can ruin an otherwise successful game. Cheating in a video game can bankrupt an entire company, as the non-cheating players leave the game because of unscrupulous individuals using cheats to gain an unfair advantage. Cheating can also involve criminal activity, where maliciously acquired in-game items are traded for real money online. Commercial cheat programs are sold on online black markets and are available even to players who have no deep technical knowledge. The widespread availability and easy accessibility of cheats compounds the issue. This thesis will categorize different anti-cheat techniques and give a brief history of anti-cheat starting from the early 1980s. The history section describes how the fight against online cheating began and how it has evolved over the years. This thesis will compare different anti-cheat methods, both client-side and server-side, and draw conclusions about their viability. It will also look at scenarios where different anti-cheat methods are combined to create more powerful systems. All the anti-cheat methods will be evaluated on five different criteria on a scale of 1 to 4, with one being the lowest score and four the highest. The thesis will use a custom-built client-server game as an example to illustrate many of the anti-cheat techniques. The requirements of different types of games, such as first-person shooters and strategy games, will also be considered when reviewing the anti-cheat techniques. Lastly, the thesis will look into the future of anti-cheat and introduce video game streaming and the use of machine learning as possible new solutions to tackle cheating. The conclusion will summarize the advantages and disadvantages of the different methods and show which techniques are preferable based on the analysis.
  • Raitahila, Iivo (2019)
    The Internet of Things (IoT) consists of physical devices, such as temperature sensors and lights, that are connected to the Internet. The devices are typically battery powered and are constrained by their low processing power, memory and low bitrate wireless communication links. The vast amount of IoT devices can cause heavy congestion in the Internet if congestion is not properly addressed. The Constrained Application Protocol (CoAP) is an HTTP-like protocol for constrained devices built on top of UDP. CoAP includes a simple congestion control algorithm (DefaultCoAP). CoAP Simple Congestion Control/Advanced (CoCoA) is a more sophisticated alternative for DefaultCoAP. CoAP can also be run over TCP with TCP's congestion control mechanisms. The focus of this thesis is to study CoAP's congestion control. Shortcomings of DefaultCoAP and CoCoA are identified using empirical performance evaluations conducted in an emulated IoT environment. In a scenario with hundreds of clients and a large buffer in the bottleneck router, DefaultCoAP does not adapt to the long queuing delay. In a similar scenario where short-lived clients exchange only a small amount of messages, CoCoA clients are unable to sample a round-trip delay time. Both of these situations are severe enough to cause a congestion collapse, where most of the link bandwidth is wasted on unnecessary retransmissions. A new retransmission timeout and congestion control algorithm called Fast-Slow Retransmission Timeout (FASOR) is congestion safe in these two scenarios and is even able to outperform CoAP over TCP. FASOR with accurate round-trip delay samples is able to outperform basic FASOR in the challenging and realistic scenario with short-lived clients and an error-prone link.
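    For reference, the base mechanism behind DefaultCoAP is the binary exponential backoff defined in RFC 7252; the sketch below uses the RFC's default constants and is simplified to just computing the retransmission schedule. It does not model CoCoA's RTT-adaptive timeout or FASOR, only the baseline those algorithms improve on.

```python
import random

ACK_TIMEOUT = 2.0        # seconds, RFC 7252 default
ACK_RANDOM_FACTOR = 1.5  # randomization to avoid synchronized retransmissions
MAX_RETRANSMIT = 4

def retransmission_schedule(rng=random.random):
    # The initial timeout is drawn uniformly from
    # [ACK_TIMEOUT, ACK_TIMEOUT * ACK_RANDOM_FACTOR), then doubled before
    # every retransmission (binary exponential backoff).
    timeout = ACK_TIMEOUT * (1.0 + rng() * (ACK_RANDOM_FACTOR - 1.0))
    schedule = []
    for _ in range(MAX_RETRANSMIT + 1):
        schedule.append(timeout)
        timeout *= 2.0
    return schedule
```

Because the timeout is fixed rather than measured, hundreds of clients behind a deeply buffered bottleneck all keep retransmitting on the same static schedule, which is the congestion-collapse scenario the thesis demonstrates.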
  • Kinnunen, Petri (2020)
    Computer programs are common in our daily environment and new robotic devices emerge constantly. One of the key skills for the future is understanding the general purpose as well as the intricate details of both. Programs written in conventional programming languages are pure text, with syntax cryptic to an inexperienced observer, and errors can easily be made in the writing process, which is far from an ideal basis for introducing programming or robotics to children at the elementary school level. We set out to investigate how a visual learning environment could reduce this chasm between the skills of the young and the perceived difficulty of computer technology, and with what assortment of technologies and working methods such platforms, both programming and robotic, could be produced in a project with multiple parties. A joint endeavor was started between two research groups in different universities and the education services of a municipality in Finland. In the project, a visual programming environment was created where users can assemble programs by stacking and manipulating colorful blocks to form a command flow. The set of functions is traditional, with calculations, variables and loops, but also includes commands to pilot a robotic platform with wheels, arms and a display indicating facial expressions. This robot is also a product of the project, based on a commercial design. The venture can be considered a success. The learning environment and the roving robot were created in the course of the project, and the architectural design and selected technologies proved sensible. The empirical exploration accumulated knowledge for academic research.
  • Davis, Keith III (2020)
    We study the use of data collected via electroencephalography (EEG) to classify stimuli presented to subjects using a variety of mathematical approaches. We report an experiment with three objectives: 1) to train individual classifiers that reliably infer the class labels of visual stimuli using EEG data collected from subjects; 2) to demonstrate brainsourcing, a technique to combine brain responses from a group of human contributors, each performing a recognition task, to determine classes of stimuli; 3) to explore collaborative filtering techniques applied to data produced by individual classifiers to predict subject responses for stimuli for which data is unavailable or otherwise missing. We reveal that all individual classifier models perform better than a random baseline, while a brainsourcing model using data from as few as four participants achieves performance superior to any individual classifier. We also show that matrix factorization applied to classifier outputs as a collaborative filtering approach achieves predictive results that perform better than random. Although the technique is fairly sensitive to the sparsity of the dataset, it nonetheless demonstrates a viable proof of concept and warrants further investigation.
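    The collaborative-filtering step can be illustrated with a small matrix-factorization sketch: a subject-by-stimulus matrix with missing entries is factored over the observed entries only, and the reconstruction fills in the gaps. This is plain SGD over observed entries; the rank, learning rate and penalty are illustrative assumptions, not the study's actual settings.

```python
import numpy as np

def factorize(R, mask, rank=2, steps=3000, lr=0.02, reg=0.01, seed=0):
    # Approximate R ~= P @ Q.T using only the observed entries (mask == True);
    # predictions for the missing entries are then read off P @ Q.T.
    rng = np.random.default_rng(seed)
    n_rows, n_cols = R.shape
    P = rng.normal(scale=0.1, size=(n_rows, rank))
    Q = rng.normal(scale=0.1, size=(n_cols, rank))
    observed = list(zip(*np.nonzero(mask)))
    for _ in range(steps):
        for i, j in observed:
            err = R[i, j] - P[i] @ Q[j]
            # Gradient step with L2 regularization on the factors.
            P[i] += lr * (err * Q[j] - reg * P[i])
            Q[j] += lr * (err * P[i] - reg * Q[j])
    return P @ Q.T
```

On a low-rank matrix with a few entries masked out, the reconstruction recovers the hidden entries from the structure of the observed ones, which is the effect exploited to predict missing subject responses.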
  • Alcantara, Jose Carlos (2020)
    A recent machine learning technique called federated learning (Konečný, McMahan, et al., 2016) offers a new paradigm for distributed learning. It consists of performing machine learning on multiple edge devices while simultaneously optimizing a global model for all of them, without transmitting user data. The goal of this thesis was to demonstrate the benefits of applying federated learning to forecasting telecom key performance indicator (KPI) values from radio network cells. After performing experiments with different aggregations of data sources and comparing against a centralized learning model, the results revealed that a federated model can shorten the training time for modelling new radio cells. Moreover, the amount of data transferred to a central server is reduced drastically while keeping performance equivalent to a traditional centralized model. These experiments were performed with a multi-layer perceptron as the model architecture, after comparing its performance against an LSTM. Both input and output data were sequences of KPI values.
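    The aggregation scheme at the heart of federated learning (federated averaging) can be sketched as follows. This toy version uses a linear model trained with gradient descent rather than the thesis's MLP or LSTM, and all names and hyperparameters are illustrative.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    # One client's local training: gradient descent on squared error.
    w = w.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(w, clients):
    # FedAvg: every client trains the current global model on its own data;
    # the server then averages the returned weights, weighted by each
    # client's dataset size. Only model weights cross the network --
    # the raw data never leaves the client.
    updates = [local_update(w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes)
```

Repeating `federated_round` converges towards the model a centralized learner would find, while the per-round traffic is just the weight vector, which is the bandwidth saving the thesis measures.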
  • Viding, Jasu (2020)
    A cluster of containerized workloads is a complex system where stacked layers of plugins and interfaces can quickly hide what’s actually going on under the hood. This can result in incorrect assumptions, security incidents, and other disasters. From a networking viewpoint, this paper dives into the Linux networking subsystem to demystify how container networks are built on Linux systems. This knowledge of "how" then makes it possible to understand the different networking features of Kubernetes, Docker, or any other containerization solution developed in the future.
  • Reunamo, Antti (2020)
    The popularity of mobile instant messaging applications has flourished during the last ten years, and people use them to exchange private and personal information on a daily basis. These applications can be freely installed from online marketplaces, and average users may have several of them installed on their devices. The amount of information these messaging applications expose to a third-party eavesdropper via network traffic analysis has therefore grown significantly as well. The security features of these applications have also been developing over the years, and the communication between the applications and the background server infrastructure nowadays practically always employs encryption. Recently, more advanced end-to-end encryption methods have been developed to hide the content of the exchanged data even from the messaging service providers. Machine learning techniques have successfully been utilized in analyzing encrypted network traffic, and previous research has shown that this approach can effectively detect mobile applications and the actions users are performing in those applications, regardless of encryption. While the actual content of the messages and other transferred data cannot be accessed by the eavesdropper, these methods can still lead to serious privacy compromises. This thesis discusses the present state of machine learning-based identification of applications and user actions, how feasible it would be to actually perform such detection in a Wi-Fi network, and what kinds of privacy concerns would arise.
  • Roy, Suravi Saha (2020)
    COVID-19 began in December 2019 in Wuhan, China. It has since spread all around the globe and was declared a global pandemic in early March 2020 by the World Health Organization (WHO). Ever since the pandemic started, the number of infections has grown exponentially. Currently, there is a global rise in COVID-19 cases, with 3.6 million new cases, and new deaths showing a weekly growth of 21%. The disease outbreak has caused over 55.6 million infected cases and more than 1.34 million deaths worldwide since the beginning of the pandemic. The reverse transcription polymerase chain reaction (RT-PCR) test is the best protocol currently in use to detect COVID-19 positive patients. In low-resource settings, especially in developing countries with large populations, the RT-PCR test is not always a viable option, as it is expensive, time-consuming and requires trained professionals. With the overwhelming number of infected cases, there is a significant need for a substitute that is cheaper, faster and more accessible. To that end, machine learning classification models are developed in this study to detect COVID-19 positive patients and predict patient deterioration in the presence of missing data, using a dataset published by Hospital Israelita Albert Einstein in São Paulo, Brazil. The dataset consists of 5644 anonymous samples from patients who visited the hospital and took the RT-PCR test, along with additional laboratory test results providing 111 clinical features. Additionally, more than 90% of the values in this dataset are missing. To explore missing data analysis on COVID-19 clinical data, a comparison between a complete case analysis and an imputed case analysis is reported in this study. It is established that a logistic regression model with multivariate imputation by chained equations (MICE) on the data provides 91% and 85% sensitivity, respectively, for detecting COVID-19 positive patients and predicting patient deterioration. The area under the receiver operating characteristic curve (AUC) is reported at 93% and 89% for the two tasks, respectively. Sensitivity and AUC are selected for evaluating the models' performance, as false negatives are harmful for patient screening and triaging. The proposed pipeline is an alternative approach to COVID-19 diagnosis and prognosis. Clinicians can employ this pipeline for early screening of suspected COVID-19 patients, for triaging medical procedures, and as a secondary diagnostic tool for deciding patients' priority for treatment, by utilizing low-cost, readily available laboratory test results.
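    The chained-equations imputation used above can be sketched in a simplified, deterministic form. Real MICE draws from a posterior and produces multiple imputations; this toy version just regresses each incomplete column on the others with ordinary least squares and iterates.

```python
import numpy as np

def chained_impute(X, iters=10):
    # Initialize missing entries with column means, then repeatedly regress
    # each incomplete column on all other columns and refill its missing
    # entries with the regression's predictions.
    X = X.astype(float).copy()
    missing = np.isnan(X)
    means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):
        X[missing[:, j], j] = means[j]
    for _ in range(iters):
        for j in range(X.shape[1]):
            if not missing[:, j].any():
                continue
            observed = ~missing[:, j]
            # Design matrix: intercept plus every column except j.
            A = np.c_[np.ones(len(X)), np.delete(X, j, axis=1)]
            beta, *_ = np.linalg.lstsq(A[observed], X[observed, j], rcond=None)
            X[missing[:, j], j] = A[missing[:, j]] @ beta
    return X
```

The imputed matrix can then be handed to any downstream classifier, such as the logistic regression model used in the study.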
  • Kuisma, Ilkka (2019)
    Context: The advent of Docker containers in 2013 provided developers with a way of bundling code and its dependencies into containers that run identically on any Docker Engine, effectively mitigating platform- and dependency-related issues. In recent years an interesting trend has emerged of developers attempting to leverage the benefits provided by the Docker container platform in their development environments. Objective: In this thesis we chart the motivations behind the move towards Containerized Development Environments (CDEs) and seek to categorize claims made about benefits and challenges experienced by developers after their adoption. The goal of this thesis is to establish the current state of the trend and lay the groundwork for future research. Methods: The study is structured into three parts. In the first part we conduct a systematic review of gray literature, using 27 sources acquired from three different websites. Relevant quotes were extracted from the sources and used to create a set of higher-level concepts for expressed motivations, benefits, and challenges. The second part of the study is a qualitative single-case study in which we conduct semi-structured theme interviews with all members of a small software development team that had recently taken a containerized development environment into use. The case team was purposefully selected for its practical relevance as well as convenient access to its members for data collection. In the last part of the study we compare the transcribed interview data against the set of concepts formed in the literature review. Results: Cross-environment consistency and a simplified initial setup, driven by a desire to increase developer happiness and productivity, were commonly expressed motivations that were also experienced in practice. Decreased performance, the required knowledge of Docker, and difficulties in the technical implementation of CDEs were mentioned as primary challenges. Many developers experienced additional benefits of using the Docker platform for infrastructure provisioning and shared configuration management. The case team additionally used the CDE as a platform for implementing end-to-end testing, and viewed the correct type of team and management as necessary preconditions for its successful adoption. Conclusions: CDEs offer many valuable benefits that come at a cost: teams have to weigh the trade-off between consistency and performance, and whether the investment of development resources in the implementation is warranted. The use of the Docker container platform as an infrastructure package manager could be considered a game-changer, enabling development teams to provision new services like databases, load balancers and message brokers with just a few lines of code. The case study reports one account of an improved onboarding experience and points towards an area for future research. CDEs would appear to be a good fit for microservice-oriented teams that seek to foster a DevOps culture, as indicated by the experience of the case team. The implementation of CDEs is a non-trivial challenge that requires expertise from the teams and developers using them. Additionally, the case team's novel use of containers for testing appears to be an interesting research topic in its own right. ACM Computing Classification System (CCS): Software and its engineering → Software creation and management → Software development techniques
  • Kärkkäinen, Satu (2020)
    With digitalization, collections have begun to be digitized and services delivered online. The COVID-19 pandemic forced museums into a new situation when their premises had to be closed to visitors. Digital means and channels have been adopted as part of audience engagement work. Offering audience engagement through multiple channels creates added value and makes it accessible regardless of time and place. The study is in itself a valuable case study, focusing on Amos Rex and the Helsinki City Museum. These museums can teach us how digital audience engagement can be practiced, and the results may encourage testing new methods and implementing new digital products. Previous studies have charted the practices and challenges of digital audience engagement in general. This study examines how museums use digital image content, and with which tools and through which channels it is presented. The study is limited to the following areas of audience engagement: complementing exhibitions on the organization's premises, producing and sharing exhibition-related content online, and modifying or enriching open digital collections. Based on the means and channels, the ISO/IEC 25010 quality model was used to examine the grounds on which these solutions were chosen. The research material consists of exhibitions displayed in the museums, the museums' online publications, and an interview. The museums have several digital channels in use, and the Helsinki City Museum makes more use of digital means than Amos Rex. A challenge is that creators and the museum do not meet; various events and challenges are used to attract people to the open collections. When choosing digital tools and means, the most important criteria from the user's perspective are usability and efficiency, and from the museum's perspective, maintainability and portability.
  • Aalto, Iiro (2020)
    Slack is an instant messaging platform intended for the internal communications of companies and other organizations. For organizations that use Slack extensively it may provide an interesting source of insight, but as such the data is difficult to analyze. Topic modeling, primarily latent Dirichlet allocation (LDA), is commonly used to summarize textual data in a meaningful way. Instant messages tend to be very short, which causes problems for conventional topic modeling methods such as LDA. The data sparsity problem can be tackled with data expansion and data combination techniques. For instant messages, data combination is particularly attractive, as the messages are not independent of each other but form implicit, and sometimes explicit, threads as the participants reply to each other. Most of the threads in the Slack data are not explicit, but must be ’untangled’ from the message stream if they are to be used as a basis for a data combination scheme. In this thesis we study the possibility of detecting implicit threads from a Slack message stream and leveraging the threads as a data combination scheme in topic modeling. The threads are detected using a hierarchical clustering algorithm which uses word mover’s distance, latent semantic analysis, and metadata to compute the distances between messages. The clusters are then concatenated and used as the input for LDA. It is shown that on a dataset gathered from the Gofore Oyj Slack workspace, the cluster-based model improves on the message-based model, but falls short of being practical.
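    The data-combination idea can be illustrated with a toy threading sketch: group messages by lexical similarity and time gap, then concatenate each group into one pseudo-document for the topic model. This greedy single-pass variant is an illustrative stand-in for the thesis's actual hierarchical clustering over word mover's distance, LSA and metadata; the thresholds are made-up.

```python
import numpy as np
from collections import Counter

def cosine_distance(a, b):
    # 1 - cosine similarity between two bag-of-words Counters.
    keys = list(set(a) | set(b))
    va = np.array([a.get(k, 0) for k in keys], dtype=float)
    vb = np.array([b.get(k, 0) for k in keys], dtype=float)
    denom = np.linalg.norm(va) * np.linalg.norm(vb)
    return 1.0 if denom == 0 else 1.0 - float(va @ vb) / denom

def thread_messages(messages, timestamps, max_dist=0.8, max_gap=300.0):
    # Attach each message to the closest recent cluster (within max_gap
    # seconds and max_dist lexical distance); otherwise start a new thread.
    # Each cluster's messages are concatenated into one pseudo-document,
    # which would then be fed to a topic model such as LDA.
    clusters = []
    for text, t in zip(messages, timestamps):
        bow = Counter(text.lower().split())
        recent = [c for c in clusters if t - c["last"] <= max_gap]
        best = min(recent, key=lambda c: cosine_distance(bow, c["bow"]),
                   default=None)
        if best is not None and cosine_distance(bow, best["bow"]) <= max_dist:
            best["bow"] += bow
            best["last"] = t
            best["texts"].append(text)
        else:
            clusters.append({"bow": bow, "last": t, "texts": [text]})
    return [" ".join(c["texts"]) for c in clusters]
```

Interleaved messages about different topics end up in separate pseudo-documents, giving LDA longer and more coherent inputs than the raw message stream.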
  • Hansson, Kristian (2019)
    The purpose of edge computing is to move data processing closer to the data source, since the computing capacity of centralized servers will not be sufficient to analyze all data simultaneously in the future. The Internet of Things is one of the use cases of edge computing. Edge computing systems are fairly complex and increasingly require the application of agile DevOps practices, and suitable technologies must be found to implement these practices. The first research question was: What kinds of technical solutions have been applied to delivering edge computing applications? This was answered by examining industry solutions, i.e., those of cloud service providers. The technical solutions revealed that either containers or packaged directories are used as the vehicle for delivering edge computing applications, and that lightweight communication protocols or a VPN connection are used for communication between the edge and the server. The literature review identified container clusters as a possible management tool for edge computing. The second research question was derived from the results of the first: Can Docker Swarm be utilized in operating edge computing applications? The question was answered with an empirical case study. A centralized delivery process for edge computing applications was built using the Docker Swarm container cluster software, cloud servers and Raspberry Pi single-board computers. In addition to delivery, the study considered runtime monitoring of applications, rollback to the previous software version, grouping of cluster devices, attaching physical peripherals and support for different processor architectures. The results showed that Docker Swarm can be used as-is for managing edge computing software. Docker Swarm is suitable for delivery, monitoring, rolling back to a previous version and grouping. In addition, it can be used to create clusters that run the same software on processors with different architectures. However, Docker Swarm proved unsuitable for controlling peripherals attached to an edge device. The abundance of industrial edge computing solutions demonstrated broad interest in the practical application of containers. Based on this study, container clusters in particular proved to be a promising technology for managing edge computing applications. To obtain further evidence, broader empirical follow-up studies using a similar framework are needed.