Browsing by department "Tietojenkäsittelytieteen osasto"


  • Toivanen, Aleksi (2020)
    As browsers have matured as a programming environment, the Document Object Model (DOM) has become established as the principal rendering target. The DOM enables efficient document manipulation, but using it requires deep familiarity, and not all of the programming interfaces it offers are free of problems: the DOM is easy to use inefficiently, and certain parts of it are error-prone. These challenges have been addressed by offering more manageable, higher-level programming environments and interfaces. The central role of DOM-related challenges in browser-based application development can be seen in the sheer number of frameworks and libraries that render to the DOM. The solutions and approaches to these challenges vary greatly. This thesis investigates how rendering frameworks, libraries, and languages approach the problems of rendering to the DOM, and to what extent their solutions are tied specifically to the DOM as a rendering target. The thesis covers seven analyzed frameworks, libraries, and languages that render to the DOM, selected for the diversity of their approaches and techniques. They aim to abstract the developer's access to the real DOM, thereby preventing the most common error situations, and to hide their rendering. The thesis explains how the analyzed frameworks and libraries operate and describes the key design decisions they make. The analysis of individual frameworks cannot be directly generalized to frameworks outside the analysis, but it does point to broader prevailing trends, and common traits can be found across the analyzed frameworks. The analysis demonstrates the abundance of solutions for optimized rendering to the DOM: the same end result can be reached in many different ways, with the methods offering differing strengths.
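    The shared strategy of the analyzed frameworks, rendering into an intermediate representation and applying only the differences to the real DOM, can be illustrated with a minimal sketch (illustrative names and structure, not code from any framework covered in the thesis):

    ```python
    # Minimal virtual-DOM sketch: build a lightweight tree, diff it against the
    # previous tree, and emit only the patches that must touch the real DOM.
    from dataclasses import dataclass, field

    @dataclass
    class VNode:
        tag: str
        props: dict = field(default_factory=dict)
        children: list = field(default_factory=list)

    def diff(old, new, path="root"):
        """Return patch operations instead of mutating the DOM directly."""
        if old is None:
            return [("CREATE", path, new)]
        if new is None:
            return [("REMOVE", path)]
        if old.tag != new.tag:
            return [("REPLACE", path, new)]
        patches = []
        if old.props != new.props:
            patches.append(("SET_PROPS", path, new.props))
        for i in range(max(len(old.children), len(new.children))):
            o = old.children[i] if i < len(old.children) else None
            n = new.children[i] if i < len(new.children) else None
            patches.extend(diff(o, n, f"{path}/{i}"))
        return patches

    before = VNode("ul", children=[VNode("li", {"class": "done"})])
    after = VNode("ul", children=[VNode("li", {"class": "open"}), VNode("li")])
    print(diff(before, after))
    # -> a SET_PROPS patch for the first item and a CREATE patch for the second
    ```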
  • Kähönen, Simo (2020)
    Context. A Software Product Line (SPL) is a set of software system products that have common features and product-specific features. A Dynamic Software Product Line (DSPL) is an SPL that features runtime variability. Objective. The main objective of this study is to evaluate the latest research related to SPL dynamic variability in general. The second objective is to investigate dynamic variability modeling methods and tools that scholars have utilized and introduced for SPLs. The third objective is to investigate testing methods and tools that scholars have utilized and introduced for DSPLs. Method. The research method of this study is a Systematic Literature Review (SLR). The papers included were published between 2015 and 2017. Four scientific digital libraries were used as data sources. Results. The main result of this study is that between 2015 and 2017 there was an active research community studying SPL dynamic variability. For the 25 papers included in this study, on a scale of 0 to 10, the arithmetic mean of the quality scores is 7.14 (median 7.5). One industry-practice DSPL implementation case study was presented by the scholars; three other case studies appeared to be more or less simplified exemplars of industry-practice DSPL implementations. Two studies focused on testing aspects of DSPLs. The second result is that scholars have utilized 19 existing dynamic variability modeling methods for SPLs and introduced 17 new ones, and have utilized seven existing dynamic variability modeling tools for SPLs and introduced four new ones. The third result is that scholars have introduced four new testing methods for DSPLs and utilized two existing testing tools for DSPLs. Conclusions. The general conclusion is that although SPL dynamic variability was actively studied between 2015 and 2017, there are still open research areas, especially in industry-practice use and testing of DSPLs. 2012 ACM Computing Classification System (CCS): Software and its engineering -> Software creation and management -> Software development techniques -> Reusability -> Software product lines; Software and its engineering -> Software creation and management -> Designing software -> Software design engineering
  • Häggblom, Svante (2019)
    Background: User experience (UX) is seen as an important quality of a successful product, and software companies are becoming increasingly interested in the field of UX. As UX has the goal of improving the experience of users, better methods are needed for measuring the actual experience. One aspect of UX is understanding the emotional side of experience. Psychophysiology studies the relations between emotions and physiology, and electrodermal activity (EDA) has been found to be a physiological measure of emotional arousal. Aims: The aim of this thesis is to research the utility of measuring EDA to identify moments of emotional arousal during human-computer interaction. By studying peaks in EDA during software interaction, we expect to find issues in the software that act as triggers or stimuli for the peaks. Method: We used the design science methodology to develop EDAMUX, a method for unobtrusively observing users while gathering significant interaction moments through self-reporting and EDA. A qualitative single-case study was conducted to evaluate the utility of EDAMUX. Results: We found that we can discover causes of bad user experience with EDAMUX. Moments of emotional arousal, derived from EDA, were found in conjunction with performance issues, usability issues, and bugs. Emotional arousal was also observed during software interaction where users were blaming themselves. Conclusions: EDAMUX shows potential for discovering issues in software that are difficult to find with methods that rely on subjective self-reporting. The potential to objectively study emotional reactions is seen as valuable in complementing existing methods of measuring user experience.
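    The thesis does not publish EDAMUX itself; the peak-detection step it relies on can be sketched as follows (the thresholds are illustrative assumptions, not values from the study):

    ```python
    # Sketch: flag moments of emotional arousal as abrupt rises in an
    # electrodermal activity (EDA) trace, so they can later be aligned with
    # logged interaction events and self-reports.

    def find_eda_peaks(signal, fs, min_rise=0.05, window_s=1.0):
        """Return sample indices where the signal rises by at least
        `min_rise` microsiemens within `window_s` seconds."""
        window = int(fs * window_s)
        peaks = []
        for i in range(len(signal) - window):
            if signal[i + window] - signal[i] >= min_rise:
                peaks.append(i + window)
        return peaks

    # A 4 Hz EDA trace with one abrupt rise midway through
    trace = [0.40, 0.40, 0.41, 0.40, 0.41, 0.42, 0.55, 0.70, 0.72, 0.71, 0.70]
    print(find_eda_peaks(trace, fs=4))  # indices to match against event timestamps
    ```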
  • Shestovskaya, Jamilya (2020)
    Nowadays the number of connected devices is growing sharply. Mobile phones and other IoT devices are an inherent part of everyday life and are used everywhere. The amount of data generated by IoT devices and mobile phones is enormous, which causes network congestion, while the use of centralized cloud architecture increases delay and causes jitter. To address these issues, the research community has discussed a new trend of decentralization: edge computing. Various researchers have suggested different edge computing architectures, some more popular and supported by global companies, and most of them share similarities. In this research, we reviewed seven edge computing architectures. This thesis is a comparative analysis, carried out using key attributes and a Venn diagram presentation, to support selecting the right edge computing architecture.
  • Kovala, Jarkko (2020)
    The Internet of Things (IoT) has the potential to transform many domains of human activity, enabled by the collection of data from the physical world at a massive scale. As the projected growth of IoT data exceeds that of available network capacity, transferring it to centralized cloud data centers is infeasible. Edge computing aims to solve this problem by processing data at the edge of the network, enabling applications with specialized requirements that cloud computing cannot meet. The current market of platforms that support building IoT applications is very fragmented, with offerings available from hundreds of companies and no common architecture. This threatens the realization of IoT's potential: with more interoperability, a new class of applications that combine the collected data and use it in new ways could emerge. In this thesis, promising IoT platforms for edge computing are surveyed. First, an understanding of current challenges in the field is gained through studying the available literature on the topic. Second, the IoT edge platforms with the most potential to meet these challenges are chosen and reviewed for their capabilities. Finally, the platforms are compared against each other, with a focus on their potential to meet the challenges identified in the first part. The work shows that AWS IoT for the edge and Microsoft Azure IoT Edge have mature feature sets. However, these platforms are tied to their respective cloud platforms, limiting interoperability and the possibility of switching providers. On the other hand, the open-source EdgeX Foundry and KubeEdge have the potential to bring more standardization and interoperability to IoT but offer limited functionality for building practical IoT applications.
  • Katila, Nina (2020)
    This thesis examines, on the basis of scientific research and professional literature, means of and considerations for automating integration testing between information systems. The research methodology is a case study. The case environment is the preconditions for, and alternative implementations of, automating the integration testing between the Finnish Parliament's legislative work information systems. Information was collected from the documentation of the Parliament's information systems and from experts on those systems. The Parliament's integration-testing workflows and testing challenges are based on observations made while participating in the Parliament's integration testing for about a year. The analysis and evaluation of the automation alternatives are based on continuous collaboration with experts on the Parliament's legislative information systems and integration system. The Parliament's legislative information systems are functionally and administratively independent and differ in their implementers, implementations, and age. Because the program code of these independent systems is not accessible across systems, an integration-test automation solution must rely on what is reachable through the systems' user interfaces. The study found that robotic process automation (RPA) can imitate the integration testing that the Parliament's testers perform through the systems' user interfaces, and that with an automation framework suited to test automation and RPA, it is possible to automate the integration testing of the Parliament's information systems. With RPA, the integration tests run significantly faster and with fewer resources than with manual testing. The most significant drawback of UI-based test automation is the cost of test maintenance. A modular, keyword-driven automation framework allows automated tests and their parts to be reused across systems in integration-test automation, saving costs.
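    The modular keyword-driven style mentioned above can be sketched in a few lines (the keywords and document names are hypothetical; the actual framework and test cases are not published in the thesis):

    ```python
    # Keyword-driven sketch: test steps are named keywords bound to UI actions,
    # so the same keywords can be reused and maintained across systems.
    KEYWORDS = {}

    def keyword(name):
        def register(fn):
            KEYWORDS[name] = fn
            return fn
        return register

    @keyword("open document")
    def open_document(doc_id):
        print(f"UI automation: open document {doc_id}")

    @keyword("verify transferred")
    def verify_transferred(doc_id, target):
        print(f"UI automation: check that {doc_id} is visible in {target}")

    def run_test(steps):
        for name, *args in steps:
            KEYWORDS[name](*args)  # each step reuses a shared keyword

    # An integration test expressed as data rather than code:
    run_test([
        ("open document", "HE-123"),
        ("verify transferred", "HE-123", "archive system"),
    ])
    ```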
  • Räsänen, Hannele (2020)
    Nowadays, under the influence of the global economy, large corporations use global software development to exploit the advantages of geographically decentralised organisations and globally outsourced software development. Through distributed organisations, the work can be done around the clock. Global software development is affected by three distance dimensions: temporal distance, geographical distance, and socio-cultural distance, all of which bring challenges. At the same time, the agile way of working has become an increasingly popular approach to software development. As agile practices were created for co-located teams, there is a demand for working online solutions for communication and collaboration in distributed teams. Corporations use scaled agile ways of working to support the software development of large initiatives and projects, and the Scaled Agile Framework (SAFe) is the most popular of the scaled agile methods. This thesis was conducted as a case study in a multinational corporation. The objective of the case study was to research the effect of the scaled agile methodology SAFe on communication and collaboration in teams and agile release trains. The case study comprised two parts: a web survey and interviews. The results of the analyses support findings from the literature in the field: they indicate the importance of communication and collaboration in agile practices and the significance of the online tools that support them.
  • Dobrodeev, Vladimir (2020)
    Mobile networks are becoming more and more pervasive. The growing demand for mobile connectivity fosters installations of new base stations and various low-power network elements, and the upcoming 5th generation of mobile networks (5G) will require a significant increase in the density of network elements. This growth introduces new challenges for network management, among them scalability and the ability to handle larger volumes of data, which also show higher variety and require higher processing velocity for faster reaction to abnormal situations. These requirements of data volume, variety, and processing velocity call for Big Data solutions. Data exchange in Big Data systems can be organized with a special Message Oriented Middleware (MOM), a software solution providing scalable and reliable message-based communication. A popular communication paradigm for MOM is publish/subscribe, and there are two main approaches to designing systems based on this paradigm: topic-based and content-based. The first offers lightweight message routing but has limited capabilities for describing the required content; the second eliminates these limitations at the cost of more complex message routing, resulting in higher demands on middleware performance and message processing cost. The solution proposed in the thesis combines the benefits of the two approaches: expressiveness in describing interests, and the easy message routing of the topic-based approach. At the core of the solution is an algorithm realizing a perfect matching between interests and topics. Constructing such a matching allows more efficient usage of system resources, e.g., network traffic, by exploiting overlaps between interests.
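    A minimal sketch of the underlying idea (a simplification, not the thesis algorithm): content-style interests are grouped onto shared topics, so each predicate is evaluated once per message while fan-out stays topic-based.

    ```python
    # Interests are attribute predicates; overlapping (here: identical)
    # interests share one topic, so the broker checks each predicate once
    # per message instead of once per subscriber.
    topics = {
        "high-load": {
            "predicate": lambda m: m["cell_load"] > 0.9,
            "subscribers": ["ops-team", "analytics"],  # overlapping interests
        },
        "high-power": {
            "predicate": lambda m: m["power_draw"] > 100.0,
            "subscribers": ["energy"],
        },
    }

    def publish(message):
        for topic, entry in topics.items():
            if entry["predicate"](message):       # content check, done once
                for sub in entry["subscribers"]:  # topic-style fan-out
                    print(f"{topic} -> {sub}: {message}")

    publish({"cell_load": 0.95, "power_draw": 80.0})
    # high-load -> ops-team ...  /  high-load -> analytics ...
    ```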
  • Hantula, Otto (2019)
    The emergence of language grounded in perception has been studied in computational agent societies with language games. In this thesis, language games are used to investigate methods for grounding language in practicality, meaning that the emergence of the language is based on the needs of the agents. The needs of an agent arise from its goals and environment, which together dictate what the agents should communicate to each other. The methods for practicality grounding are implemented in a simulation where agents fetch items from shelves in a 2D grid warehouse. The agents learn a simple language consisting of words for spatial categories of xy-coordinates and for different types of places in the warehouse environment. The language is learned and used through two novel language games, the Place Game and the Query Game, in which the agents use the spatial categories and place types to refer to different locations in the warehouse, exchanging important information that can be used to make better decisions. The empirical simulation results show that the agents can utilise their language to be more efficient in fetching items; in other words, the emerged language is practical.
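    A single round of such a game can be sketched as follows (heavily simplified; the Place Game and Query Game of the thesis involve warehouse locations and richer feedback):

    ```python
    # Naming-game sketch: a speaker names a spatial category, and on failure
    # the hearer adopts the speaker's word, so vocabularies converge over time.
    import random

    class Agent:
        def __init__(self):
            self.lexicon = {}  # category -> word

        def word_for(self, category):
            if category not in self.lexicon:
                self.lexicon[category] = f"w{random.randint(0, 999)}"
            return self.lexicon[category]

    def play_round(speaker, hearer, category):
        word = speaker.word_for(category)
        if hearer.lexicon.get(category) == word:
            return True              # communication succeeded
        hearer.lexicon[category] = word
        return False

    a, b = Agent(), Agent()
    category = ("north-west", "shelf")   # spatial category + place type
    print(play_round(a, b, category))    # False: vocabularies not yet aligned
    print(play_round(a, b, category))    # True: the word has spread
    ```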
  • Poteri, Juho (2020)
    The Internet of Things (IoT) paradigm is seeing rapid adoption across multiple domains: industry, enterprise, agriculture, smart cities, and households, to name only a few. IoT applications often require wireless autonomy, placing challenging requirements on communication techniques and power supply methods. Wireless networking with energy-constrained devices, as is often the case in wireless sensor networks (WSN), demands explicit attention both to conserving the supplied power and to the efficiency of the power drawn and energy used. As radio communications characteristically consume the bulk of all energy in wireless IoT systems, this constrained energy budget, combined with targets for terminal device lifetime, sets requirements for the communication protocols and techniques used. This thesis examines two open-architecture low-power wide-area network (LPWAN) standards with mesh networking support, along with their energy consumption profiles in the context of power-constrained wireless sensor networks. The introductory section is followed by an overview of IoT and WSN foundations and technologies. The next section describes the IEEE 802.15.4 standard and ecosystem, followed by the Bluetooth LE and Bluetooth Mesh standards. A discussion of these standards' characteristics, behavior, and applicability to power-constrained sensor networks is then presented.
  • Rantapelkonen, Antti (2020)
    A self-driving car must be able to observe and predict the behavior of other road users in an environment whose state is only partly observable. Sensor data provides accurate identification of the type, location, speed, and orientation of other road users, but predicting their intentions is difficult for artificial intelligence. The problem can be addressed with the partially observable Markov decision process (POMDP), which provides a mathematical framework for decision making under uncertainty. The challenge for POMDPs, however, is real-time computation: solving a POMDP exactly is mathematically intractable, so POMDP solvers are used to compute approximations that are sufficiently accurate. Additionally, scaling to an adequate number of other road users is challenging for many POMDP solvers. This master's thesis is a literature survey in which four research papers are analyzed. The papers provide solutions for handling uncertainty in a self-driving car's decision making using POMDPs with different solver algorithms in intersection and crosswalk scenarios.
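    All four surveyed papers build on the same belief-update machinery; in the standard textbook formulation (not specific to any one paper), after taking action a and observing o, the belief b over hidden states s is revised as:

    ```latex
    % Standard POMDP belief update: T is the transition model,
    % O the observation model, and the denominator normalizes the belief.
    b'(s') = \frac{O(o \mid s', a) \sum_{s} T(s' \mid s, a)\, b(s)}{\Pr(o \mid b, a)}
    ```

    The surveyed solvers differ mainly in how they approximate planning over this belief space fast enough for real-time driving.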
  • Judin, Toni (2020)
    Estimation is considered an essential part of a software engineering project. Lately there has been discussion on the internet about the viability of estimation in software projects under the keyword "NoEstimates". Members of the NoEstimates movement create content on questions such as whether estimation is always necessary and what the "best practice" for estimation is. At the time of this thesis, relative estimation (Story Points) was considered the best practice for estimation in the agile community and at the University of Helsinki. The aim of this thesis is to answer two questions: is estimation always necessary in software projects, and is there any viable way to estimate other than the relative and absolute estimation methods. A literature review and semi-structured interviews were used as the main research methods. The main source for finding a new viable estimation method is the content produced by the NoEstimates movement. In addition, five software engineering professionals were interviewed to gather information about the estimation methods currently used in software companies and opinions about estimation methods and estimation in general. The conclusion of this study is that estimation is almost always an essential part of a software engineering project. Relative estimation is rarely used in Finnish software companies even though it is considered the best practice; according to this study, the most commonly used estimation method in Finnish software companies is absolute estimation. The thesis also introduces an estimation method rarely mentioned outside the NoEstimates context. The method, named the "item counting method", appears to be at least a viable addition to the estimation toolbox.
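    The abstract leaves the method's details to the NoEstimates sources; in that literature, item counting usually means forecasting from counts of completed work items instead of from size estimates. A minimal sketch under that assumption:

    ```python
    # Item-counting sketch (an assumption about the method, not a definition
    # from the thesis): forecast from historical throughput, no size estimates.
    throughput_per_week = [4, 6, 3, 5, 7, 4]  # items completed in past weeks
    backlog_items = 40

    avg = sum(throughput_per_week) / len(throughput_per_week)
    print(f"~{backlog_items / avg:.1f} weeks at average throughput")
    # Best/worst observed weeks give a range instead of a single number:
    print(f"{backlog_items / max(throughput_per_week):.1f}"
          f"-{backlog_items / min(throughput_per_week):.1f} weeks")
    ```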
  • Husu, Tuomas (2020)
    System administration is a traditional and demanding profession in information technology that has gained little attention from human-computer interaction (HCI) research. System administrators operate in a highly complex environment to keep business applications running and data available and safe. In order to understand the essence of system administrators' skill, this thesis reports individual differences in 20 professional system administrators' task performance, task solutions, verbal reports, and learning histories. A set of representative tasks was designed to measure individual differences, and structured interviews were used to collect retrospective information about the system administrators' skill acquisition and level of deliberate practice. Based on the measured performance, the participants were divided into three performance groups. A group of five system administrators stood out from the 20 participants: they completed more tasks successfully, they were faster, they predicted their success more accurately, and they expressed more confidence during performance and anticipation. Although they had extensive professional experience, the study found no relationship between duration of experience and level of expertise. The results are aligned with expert-performance research from other domains: the highest levels of performance in system administration are attained as a result of systematic practice. This involves an investment of effort and makes the activity less enjoyable than competing activities. When studying the learning histories, the quantity and quality of programming experience and other high-effort, computer-related problem-solving activities were found to be the main factors differentiating the 'expert' from the less-accomplished participants.
  • Luhtakanta, Anna (2019)
    Finding and exploring relevant information from the huge amount of information available is crucial in today's world. The information need can be a specific and precise search, a broad exploratory search, or something in between. An entity-based search engine could therefore provide a solution that serves both search goals. The focus of this study is to 1) review previous research on different approaches to entity-based information retrieval and 2) implement a system that tries to serve both precise information needs and exploratory search, regardless of whether the search is made with a basic free-form query or a query with multiple entities. It is essential to improve search engines to support different types of information needs in the ever-expanding information space.
  • Saarinen, Tuomo (2020)
    The use of machine learning and algorithms in decision-making processes in our everyday life has been growing rapidly. The uses range from bank loans and taxation to criminal sentences and child care decisions. Because of the potentially high importance of such decisions, we need to make sure that the algorithms used are as unbiased as possible. The purpose of this thesis is to provide an overview of the possible biases in algorithm-assisted decision making, how these biases affect the decision-making process, and some proposals on how to tackle these biases. Some of the proposed solutions are more technical, including algorithms and different ways to filter bias from the machine learning phase. Other solutions are more societal and legal, addressing what we need to take into account when deciding what can be done to reduce bias by legislation or by enlightening people about the issues of data mining and big data.
  • Korhonen, Tuukka (2020)
    The task of organizing a given graph into a structure called a tree decomposition is relevant in multiple areas of computer science. In particular, many NP-hard problems can be solved in polynomial time if a suitable tree decomposition of a graph describing the problem instance is given as a part of the input. This motivates the task of finding as good tree decompositions as possible, or ideally, optimal tree decompositions. This thesis is about finding optimal tree decompositions of graphs with respect to several notions of optimality. Each of the considered notions measures the quality of a tree decomposition in the context of an application. In particular, we consider a total of seven problems that are formulated as finding optimal tree decompositions: treewidth, minimum fill-in, generalized and fractional hypertreewidth, total table size, phylogenetic character compatibility, and treelength. For each of these problems we consider the BT algorithm of Bouchitté and Todinca as the method of finding optimal tree decompositions. The BT algorithm is well-known on the theoretical side, but to our knowledge the first time it was implemented was only recently for the 2nd Parameterized Algorithms and Computational Experiments Challenge (PACE 2017). The author’s implementation of the BT algorithm took the second place in the minimum fill-in track of PACE 2017. In this thesis we review and extend the BT algorithm and our implementation. In particular, we improve the efficiency of the algorithm in terms of both theory and practice. We also implement the algorithm for each of the seven problems considered, introducing a novel adaptation of the algorithm for the maximum compatibility problem of phylogenetic characters. Our implementation outperforms alternative state-of-the-art approaches in terms of numbers of test instances solved on well-known benchmarks on minimum fill-in, generalized hypertreewidth, fractional hypertreewidth, total table size, and the maximum compatibility problem of phylogenetic characters. Furthermore, to our understanding the implementation is the first exact approach for the treelength problem.
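    For readers unfamiliar with the structure being optimized: a tree decomposition maps graph vertices into bags arranged in a tree, and the quality notions above are functions of the bags. A small checker makes the three defining properties concrete (illustrative code, unrelated to the author's implementation):

    ```python
    # Verify the three defining properties of a tree decomposition.
    # Width = (largest bag size) - 1; treewidth is the minimum width
    # over all decompositions of the graph.

    def is_tree_decomposition(graph_edges, bags, tree_edges):
        nodes = {v for e in graph_edges for v in e}
        # 1. Every vertex appears in some bag.
        if not nodes <= {v for bag in bags.values() for v in bag}:
            return False
        # 2. Every edge is contained in some bag.
        if not all(any({u, v} <= bag for bag in bags.values())
                   for u, v in graph_edges):
            return False
        # 3. For each vertex, the bags containing it form a connected subtree.
        for v in nodes:
            holding = {t for t, bag in bags.items() if v in bag}
            inner = sum(1 for a, b in tree_edges
                        if a in holding and b in holding)
            if inner != len(holding) - 1:  # a subforest of a tree is connected
                return False               # iff it has k-1 edges on k nodes
        return True

    # 4-cycle a-b-c-d-a with a width-2 decomposition (largest bag: 3 vertices)
    bags = {0: {"a", "b", "c"}, 1: {"a", "c", "d"}}
    edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
    print(is_tree_decomposition(edges, bags, [(0, 1)]))  # True
    print(max(len(b) for b in bags.values()) - 1)        # width = 2
    ```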
  • Simpura, Frans (2019)
    This thesis introduces, demonstrates, and evaluates a custom VR training content logic modeling approach: the VUTS method we have created. We inspect content creation needs from the point of view of Virtuario™, an occupational safety training focused VR platform developed at the Finnish Institute of Occupational Health. We review flow-based programming and Statecharts as comparison points to our approach and analyze techniques of secondary notation for their suitability for our needs. To define and evaluate our hierarchical, visually representable flow approach, we use methods of design science: we first define what we expect from an artifact that enables scalable, modular, and visualizable VR training content logic modeling, deriving the requirements from the pedagogical substance, software architecture development, and sustainable VR training content creation. We then test our approach against these requirements by constructing real-life training scenarios with it, and evaluate how the approach fares as the artifact satisfying the set requirements. This thesis shows that our approach satisfies all the set requirements, is implementable within the Unity3D game development platform, and is suitable for the content creation needs of Virtuario™.
  • Ahonen, Lauri (2020)
    Software development is a massive industry today. Billions of dollars are spent on, and created from, immaterial products, and the stakes are very high. The failure modes of commercial software projects have been extensively studied, motivated by the large amounts of money in play. Meanwhile, failures in open-source projects have been studied less, despite open-source projects forming a massive part of today's computing industry ecosystem. As more and more companies depend on open-source projects, it becomes imperative to understand the motivations and problems in these volunteer-staffed projects. This thesis opens with an introduction to the history of open source, followed by an overview of the day-to-day minutiae of the tools and processes used to run a project. After this background context has been established, the existing body of research into open-source motivation is surveyed. The motivation of the people working on these projects has been studied extensively, as it seems illogical that highly skilled volunteers pour their efforts into supporting a trillion-dollar industry. The existing body of motivation research establishes why people work on open-source projects in general, but it does not explain which projects they choose to work on. Developers drift between projects unguided, as they are free to choose where they allocate their time and energy. Contributions to open-source projects follow a Pareto distribution, and the majority of projects never manage to attract large numbers of contributors. Others lose steam after an internal or external shock drives away the contributors. To explore the latter phenomenon, four case studies examine crises that various open-source projects have faced and how the actions of project leadership have affected the outcome for the project. Two of the shocks were caused by illegal activities of a project member, and two by social disagreements.
  • Heinonen, Jyrki (2020)
    The main theme of the conventional data warehouse is a 'single version of truth', achieved with either dimensional modeling or normalized 3NF modeling. Both techniques have issues, because on the way to the data warehouse the data is cleansed and transformed and ends up changed, hence losing information. Data Vault modeling, a response to these issues, is detail-oriented and tracks history, keeping the audit trail intact. This means we have a 'single version of facts', or 'all the data, all of the time'. The Data Vault methodology and architecture can handle Big Data and NoSQL, which are also covered in the Data Lake section of this work. Data Lake tools have evolved strongly during the last decade in response to ever-expanding data amounts, using distributed computing tactics, and a Data Lake can also ingest different types of structured, semi-structured, and unstructured data. Data warehouse (and Data Lake) processing is moving from on-premises server rooms to cloud data centers. Apache and Google in particular have developed and inspired many new tools that can process data warehouse data at petabyte scale. The challenge now is that not only do operational systems generate data for the data warehouse, but huge amounts of machine-generated data must also be processed and analyzed on these practically infinitely scalable platforms. A data warehouse solution also has to cover machine-learning requirements. So the modernization of the data warehouse is not over; all these methodologies, architectures, and tools remain in use, and the trick is to choose the right tool for the right job.
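    In the Data Vault style named above, data is split into hubs (business keys), links (relationships), and satellites (descriptive history). A minimal sketch of an insert-only satellite load, which is what keeps the audit trail intact (table and column names are hypothetical):

    ```python
    # Append-only history: changed attributes become new satellite rows, never
    # in-place updates, preserving 'all the data, all of the time'.
    import hashlib
    from datetime import datetime, timezone

    hub_customer = set()   # one row per business key, ever
    sat_customer = []      # full history of descriptive attributes

    def load_customer(business_key, attributes, source):
        hk = hashlib.md5(business_key.encode()).hexdigest()
        hub_customer.add((hk, business_key))
        latest = next((s for s in reversed(sat_customer)
                       if s["hub_key"] == hk), None)
        if latest is None or latest["attributes"] != attributes:
            sat_customer.append({          # append-only: audit trail intact
                "hub_key": hk,
                "load_ts": datetime.now(timezone.utc),
                "record_source": source,
                "attributes": attributes,
            })

    load_customer("C-1001", {"name": "Acme", "city": "Helsinki"}, "CRM")
    load_customer("C-1001", {"name": "Acme", "city": "Espoo"}, "CRM")
    print(len(hub_customer), len(sat_customer))  # 1 hub row, 2 history rows
    ```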
  • Aronen, Timo (2020)
    The microservice architecture style has become one of the most popular software architectural styles in recent years. Microservice architecture does not define an entirely new, independent software architecture; it is a specific way to implement service-oriented architecture. Together with automated software deployment processes, it enables organisations to publish new features and fixes to their applications in a rapid and lean way. The microservice architecture style is usually seen as an alternative to the more traditional way of implementing applications, where an application, often called a monolith, is a single executable piece of software. Monoliths are thought to be more difficult to develop and maintain, limited in scalability, and prone to causing more production downtime and technology lock-in. Although microservices can resolve some problems of monolithic applications, they introduce new kinds of problems that must be addressed; for example, the number of deployable components and the complexity of the whole system increase. Many organisations are planning to migrate their existing monolithic applications to sets of microservices. However, it is not clear when such a migration is cost-efficient. This thesis is a case study of an organisation that is trying to find an alternative to its monolithic style and has already migrated some applications to microservices. Using the organisation as a case study, we reflect on how theories from selected literature work in practice.