
Browsing by study line "Mjukvarusystem"


  • Erälaukko, Hannu (2022)
    In air combat it is essential to be able to compute where the boundary lies beyond which a target can be exposed to an enemy missile threat, that is, where the boundary of the geographical LAR (Launch Acceptability Region) between the target and the enemy is located. This boundary can be computed with a simulation software component that simulates the flight of a missile. The goal of this thesis was to find a solution that would make it possible to compute the locations of these boundaries in a certain real-time application so that the computation is based on the results of such a simulation component, even though the component itself is too slow to be used in that real-time application. It was decided that the simulation component would be replaced with a machine learning method trained to imitate it. To find a solution, various machine learning methods were surveyed, and one of them was chosen as the preliminary solution. The theory of the chosen method, a neural network, was studied in a literature review to support its development. The neural network was developed into the final solution using the Design Science Research process. The neural network proved fast enough to be used in the desired real-time application, and its ability to imitate the results of the simulation component also proved sufficiently accurate.
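    The core idea above, replacing a slow simulation component with a neural network trained on its outputs, can be sketched in pure Python. Everything below (the stand-in simulator, the network size, the training setup) is an illustrative assumption, not the thesis's actual models or data:

    ```python
    import math
    import random

    def slow_simulator(x):
        # Stand-in for the expensive missile-flight simulation:
        # maps one scenario parameter to a boundary distance.
        return math.sin(x) + 0.5 * x

    # Tiny one-hidden-layer tanh network, trained offline to imitate
    # the simulator and then queried as a fast surrogate at run time.
    random.seed(0)
    H = 16
    w1 = [random.uniform(-1.0, 1.0) for _ in range(H)]
    b1 = [0.0] * H
    w2 = [random.uniform(-1.0, 1.0) for _ in range(H)]
    b2 = 0.0

    def surrogate(x):
        hidden = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
        return sum(w2[j] * hidden[j] for j in range(H)) + b2, hidden

    # Training data is sampled from the slow simulator once, offline.
    data = [(i / 10.0, slow_simulator(i / 10.0)) for i in range(-30, 31)]

    lr = 0.01
    for _ in range(2000):            # plain per-sample gradient descent
        for x, y in data:
            pred, hidden = surrogate(x)
            err = pred - y
            for j in range(H):
                g = err * w2[j] * (1.0 - hidden[j] ** 2)
                w1[j] -= lr * g * x
                b1[j] -= lr * g
                w2[j] -= lr * err * hidden[j]
            b2 -= lr * err

    # The trained surrogate tracks the simulator across the sampled range.
    worst = max(abs(surrogate(x)[0] - y) for x, y in data)
    print(worst < 0.5)
    ```

    In the actual system the training targets would be the simulation component's LAR results, and the surrogate would answer queries inside the real-time application where full simulation runs are too slow.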
  • Laurinen, Tomi (2021)
    This thesis introduces Robot Configurator, software for performing variability configuration for a robot cooperation software named Cooperative Brain Service. The variability configuration problem concerns selecting appropriate combinations of components to form a valid complete product. It is applicable to the realm of software as well, as there are approaches where an instance of software is formed from different pre-made components. Cooperative Brain Service is the main product of Creative and Adaptive Cooperation between Diverse Autonomous Robots (CACDAR), a University of Helsinki research project. One of the main principles of the project is enabling cooperation between various types of robots. In Cooperative Brain Service, this is taken into account by having support for any new robots and actions be added as new modules. In the current implementation, the modules to be loaded are determined during startup through a manually written JSON configuration file. The problem is that managing such JSON files requires prior knowledge of which modules are implemented in Cooperative Brain Service. As it is crucial to be able to flexibly experiment with different cooperation scenarios in CACDAR, I design Robot Configurator as a tool to assist in the creation and management of these JSON configuration files. The main contribution of this thesis is Robot Configurator, for which I provide an architecture description that documents it from multiple stakeholder perspectives. Additionally, a novel approach to modeling the variability involved in the initialization of Cooperative Brain Service is introduced.
  • Lipo, Vili (2021)
    This thesis presents the role of unit testing in agile software development and how current practices and technological solutions have made it harder to distinguish integration testing from unit testing. The case under study is Cinia Ohjelmistoratkaisut, a medium-sized software organization that applies agile methods in its development process. Interviews and a survey were used as the data collection methods of the case study. The case study shows that in three of Cinia's software projects the key stakeholders agree on the goals and quality of unit testing. The software developers largely agree on the definition of unit testing, but no unambiguous collective definition of the relationship between unit testing and integration testing has formed. No notable shortcomings were found in the unit testing technologies used at Cinia Ohjelmistoratkaisut, but there was a desire to adopt individual technologies more widely and to help developers better understand at what level, and of what kind, test automation is worth writing. The most notable area for improvement in the unit testing practices was knowledge sharing between development teams. Other areas for improvement were effort estimates and the position of unit testing in the shared definition of done. Based on the observations made in this thesis, we propose further research on the focus areas of developer-written test automation and on effort estimation in agile software development.
  • Haatanen, Heini (2020)
    Robotic process automation (RPA) is a set of technologies that can be used to automate routine processes in office work. The main goal of applying RPA is to replace people's repetitive and tedious office tasks with automation. When a software robot executes the process of a task, it imitates human actions. The topic of this Master's thesis is the prerequisites for applying RPA. The goal of the study is to examine what skills and capabilities an organization needs in order to automate its business with RPA. The results can be used in future RPA automation projects. The study is a qualitative, semi-structured interview study. The material was collected through interviews conducted with the help of a questionnaire, which was supported by a literature review, and snowball sampling was used to gather background information. The prerequisites were examined from three perspectives: the process, the expert, and the organization, and the results likewise describe these areas separately, although all of them are closely interrelated. Based on the results, an important requirement for automation from the organization's perspective is that the process to be automated can be described precisely and can be automated at all. Experts are generally required to have logical thinking, an understanding of technology, and communication skills; the required skills depend on the expert's field, but regardless of the role, a basic understanding of programming and RPA tools is needed. The process is required to be routine, repetitive, and rule-based, and the automation must also be profitable. The results of the study can be used by anyone considering automating work tasks with RPA.
ACM Computing Classification System (CCS): •Social and professional topics~Professional topics~Computing and business~Automation •Information systems~Information systems applications~Process control systems
  • Laukkanen, Olli (2020)
    Decision-making is an important part of all software development. This is especially true in the context of software architecture: software architecture can even be thought of as a set of architectural decisions, so decision-making plays a large part in shaping the architecture of a system. This thesis studies architecturally significant decision-making in the context of a software development project. It presents the results of a case study whose primary source of data was interviews. The case is a single decision made in the middle of a subcontracted project, involving the development team and several stakeholders from the client, including architects. The decision was handled quickly by the development team when an acute need for a decision arose. The work relating to the decision-making was mostly done within the agile development process used by the development team; only the final approval from the client was given outside that process, after the decision had already been made in practice and an implementation based on it had been built. This illustrates how difficult it is to incorporate outside decision-making into software development. The decision-making also had a division of labour in which one person did most of the work, researching and preparing the decision. This type of division of labour may perhaps generalize to other decision-making within software development.
  • Lassila, Atte (2019)
    Modern software systems increasingly consist of independent services that communicate with each other through their public interfaces. Requirements for a system are thus implemented through communication and collaboration between its different services, which creates challenges in how each requirement is to be tested. One approach to testing the communication between different services is end-to-end testing, with which a system consisting of multiple services can be tested as a whole. However, end-to-end testing has many disadvantages: the tests are difficult to write and maintain. When end-to-end testing should be adopted is thus not clear. In this research, an artifact for continuous end-to-end testing was designed and evaluated in use at a case company. Using the results gathered from building and maintaining the design, we evaluated what requirements, advantages and challenges are involved in adopting end-to-end testing. Based on the results, we conclude that end-to-end testing can confer significant improvements over manual testing processes. However, because of the equally significant disadvantages of end-to-end testing, its scope should be limited and alternatives should be considered. To alleviate the challenges of end-to-end testing, investment in improving interfaces, as well as in deployment tools, is recommended.
  • Siipola, Tuomas (2021)
    Programming languages are the foundation of the digital infrastructure on which programmers and companies build information systems and products of many kinds. The development, use, and theoretical properties of programming languages have been studied extensively, but there is still little literature on the continued evolution of widely used programming languages and the decision-making involved in it. In this thesis, the evolution of programming languages is studied as a social phenomenon, with particular interest in the processes used to coordinate the development work and in the roles in which the participants act. The thesis examines the Rust programming language, which is developed in an open-source community and whose changes and direction are discussed and decided through a public Request for Comments process. Descriptive statistics and social network analysis are used to examine how the process works in practice and how the participants influence its course.
  • Speer, Jon (2020)
    The techniques used to program quantum computers are somewhat crude. As quantum computing progresses and becomes mainstream, a more efficient method of programming these devices would be beneficial. We propose a method that applies today's programming techniques to quantum computing, with program equivalence checking used to discern between code suited for execution on a conventional computer and on a quantum computer. This process involves implementing a quantum algorithm in a programming language. This so-called benchmark implementation can be checked against code written by a programmer; semantic equivalence between the two implies that the programmer's code should be executed on a quantum computer instead of a conventional computer. Using a novel compiler optimization verification tool named CORK, we test for semantic equivalence between a portion of Shor's algorithm (representing the benchmark implementation) and various modified versions of this code (representing the arbitrary code written by a programmer). Some of the modified versions are intended to be semantically equivalent to the benchmark, while others are intended to be semantically inequivalent. Our testing shows that CORK is able to correctly determine semantic equivalence or inequivalence in a majority of cases.
  • Sipilä, Suvi (2020)
    Clean and high-quality code affects the maintainability of software throughout the software lifecycle, and cleanliness and high quality should be pursued from the development phase onwards. Nowadays, software is developed rapidly, which is why the code must be easy to maintain; when it is, it can basically be maintained by any software developer. The thesis conducted a literature review of clean and high-quality code. The thesis aimed to find out what clean and high-quality code is at the class and function level. The purpose of the thesis was to explain why clean and high-quality code is necessary and how it can be improved with different tools such as metrics, refactoring, code review, and unit tests. The thesis also included a survey of software developers. The survey sought an answer to how clean and high-quality code practices are implemented in working life from the perspective of software developers. 103 software professionals responded to the survey. Based on the responses, 82.5% of respondents felt that they always or usually write clean and high-quality code. The main reasons why clean and high-quality code cannot be written were the challenges of the old codebase and schedule pressures. Writing code is a very people-oriented job, so we must understand the code and its purpose. The code must be simple and carefully written. When the code is clean and high-quality, it is easier to read and understand, and thus easier to maintain.
  • Mäkelä, Nicolas (2020)
    The goal of real-time rendering is to produce synthetic and usually photorealistic images from a virtual scene as part of an interactive application. A scene is a set of light sources and polygonal objects. Photorealism requires a realistic simulation of light, but this contains a recursive problem where light rays can bounce between objects countless times. The objects can contain hundreds of thousands of polygons, so they cannot be processed recursively in real time. The subject of this thesis is a voxel-based lighting method, where the polygonal scene is processed into a voxel grid. When calculating the indirect bounces of light, we can then process a small number of voxels instead of the vast number of polygons. The method was introduced in 2011, but it didn't gain much popularity due to its performance requirements. In this thesis, we studied the performance and image quality of the voxel-based lighting algorithm with a modern, low-cost graphics card. The study was conducted through design research; the artefact is a renderer that produces images with the voxel-based algorithm. The results show that the algorithm is capable of a high frame rate of 60 images per second at a resolution of 1920x720 pixels. However, the algorithm consumes most of the time spent forming each image, which doesn't leave much time to simulate a game world, for example. In addition, the voxelization of the scene is a slow operation that would require application-specific optimizations in order to be performed every frame and thus support dynamically moving objects. The image quality improves greatly compared to a scene without indirect light bounces, but there is a problem of light bleeding through solid objects.
  • Hyytiälä, Otto (2021)
    Remote sensing satellites produce massive amounts of data about the earth every day. This earth observation data can be used to solve real-world problems in many different fields. Finnish space data company Terramonitor has been using satellite data to produce new information for its customers. The process for producing valuable information includes finding raw data, analysing it, and visualizing it according to the client's needs. This process contains a significant amount of manual work done at local workstations. Because satellite data can quickly become very big, it is not efficient to use unscalable processes that require a lot of waiting time. This thesis tries to solve the problem by introducing an architecture for a cloud-based real-time processing platform that allows satellite image analysis to be done in a cloud environment. The architectural model is built using microservice patterns to ensure that the solution scales to match changing demand.
  • Anafi, Babatunde Olamilekan (2021)
    Cultural Heritage (CH) collections, data, and artefacts used to be available mainly in galleries, libraries, archives, and museums (GLAM). Due to digitisation, huge collections of CH data are now available online. Since these data appeared online, CH data have been searched, analysed and presented using various methods and visualisations. However, many of these methods do not take adequate advantage of the nature of CH data, and even when the dimensions of CH data are utilised, the choices of visualisation sometimes fall short. This thesis is part of the Finnish Archaeological Finds Recording Linked Open Database (SuALT) project. The project aims to develop digital Web services to cater for archaeological finds made by members of the public, especially metal detectorists, through 'citizen science'. Herein, a portion of an innovative prototype that tries to enable serendipitous knowledge discovery by making searching and visualising CH data seamless is presented. The artefact will serve as part of the basis of the FindSampo portal. The prototype is not implemented from scratch but is based on the Sampo model, the Sampo-UI framework and several third-party libraries and visualisations. The prototype mainly comprises a user-centric faceted search engine with keyword searches to filter the facet options. It also includes data-analytic tools for visualising the filtered data in a table, a timeline chart, a pie chart, a line chart and maps, as well as an option to export the results in CSV form for further analysis in external tools. However, this thesis focuses on the faceted search engine and the timeline visualisation that presents the spatio-temporal nature of the SuALT data. The portion of the prototype was evaluated based on three principles: serendipity, generosity, and criticality. The results of the user experience survey suggest that the prototype provides a good starting point for exploring the find collection, makes access to individual find information easier, improves the serendipitous discovery of archaeological find data, and makes find data analysis and interpretation easier.
  • Salo, Jukka-Pekka (2020)
    User experience has become vital for many software development projects, but software development methods and tools were not originally intended for it. Moreover, software development is fundamentally complex and an increasingly social profession. This shift towards designing for user experience as a diverse group has brought new challenges to software development. The objective of this study is to find out how developers and designers form a shared understanding of the UX of the software system under development. The central theme is the activities of UX work: what methods are in use (e.g. User-Centered Design, Agile) and how they work in practice, that is, what kind of information developers and designers share and what kind of artifacts they produce in collaboration. This study answers two research questions: (RQ1) How do developers and designers form a shared understanding of the UX of the software system under development; and (RQ2) What artifacts are utilized in their collaboration. To answer the research questions, a single-case study was conducted by interviewing the employees of a Finnish startup company. The company develops enterprise resource planning (ERP) software for rental businesses. The results show that a shared understanding of the UX is achieved with UX activities throughout the system's lifecycle, where user participation is required from the beginning of new software development. Furthermore, the artifacts, in combination with developers' participation in some of the UX activities, convey the design intent to the implemented software.
  • Saaristo, Verner (2021)
    Command and control (C2) systems are tools created to support leadership; they are used to assess the situational picture, to plan operations and courses of action, and to receive information from and send information to the operating environment. In defence organizations, various simulators are used to train personnel for real situations in real or virtual environments. In a virtual environment, situations that are expensive and difficult to set up can be rehearsed cost-effectively and meaningfully. The thesis examines how, and with which technologies, it is most worthwhile to enable interoperability between simulators and C2 systems. It also identifies the concrete benefits that result from this interoperability, in the form of the capabilities it makes possible. As part of the thesis, an experimental system was implemented using C2SIM, a new standard for the interoperability of C2 systems and simulators. The purpose of the system is to support an analysis of the standard's readiness for production use and to demonstrate its capabilities. Based on the implementation of the experimental system and its analysis, C2SIM was found to be unfit for production use, particularly because of insufficient support: at least the simulator systems included in the experimental system do not fully support the standard in its current form, and the C2 system we used offered no connector for it at all. To support interoperability between C2 systems and simulators, it is therefore advisable to use either solutions tailored separately to each system using their software development kits, or combined solutions that make use of several existing standards.
  • Liukkonen, Jussi (2020)
    This thesis covers the design and implementation of continuous integration and delivery for applications that run in cloud services. Continuous integration and delivery are often applied with a focus on a single application; in this case, the same CI/CD pipeline is intended to be used by several dozen applications so that the pipeline can be maintained and developed as a single whole. The implementation is built on GitLab's continuous integration, which is used to ease the development of the applications. The applications run in Docker containers, which are managed in the container orchestration environment Kubernetes, hosted in the Microsoft Azure cloud. As background for the case study, the thesis covers the basics of Docker, Kubernetes, Azure, and GitLab CI.
  • Eloranta, Juha-Pekka (2020)
    The single-page application (SPA) model has become a popular way of building web applications. It makes the user experience of a website more similar to that of desktop applications, which is achieved by not having to make a request to the backend for each page navigation and operation. However, the SPA model brings some of the challenges of distributed data management to ordinary web applications. Managing distributed consistency is a perennial research topic in computer science, yet it has received little attention in the single-page application context. This thesis compares single-page applications to distributed databases and aims to identify techniques from the latter that could be used in single-page applications. The comparison begins by looking at different distributed databases and analysing which of them matches the single-page application model most closely. The comparison then focuses on two topics. First, techniques used by distributed databases for replicating data from one site to another are presented and compared to the techniques web applications use for communication between the server and the browser. The second topic is how consistency-related models, such as the ACID properties along with isolation levels and anomalies, are realized or manifested in single-page applications. As a result of the comparison, it was observed that the data transfer methods used in distributed databases and in web applications differ somewhat from each other. Distributed systems favour a push model for replication, and replication is automatic from the application developer's perspective, whereas web applications often use a pull model and leave implementing replication as the application developer's responsibility. A set of consistency anomalies that can manifest in single-page applications was found while analysing the consistency topic. The findings give a good starting point for developing libraries that could solve some of the problems that were found.
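    The push/pull distinction drawn in the abstract can be illustrated with a minimal sketch. The class and method names below are made up for illustration; they are not from the thesis or any real replication library:

    ```python
    class PullStore:
        """Pull model: the replica (e.g. a browser) fetches state on demand."""
        def __init__(self):
            self.state = {}

        def write(self, key, value):
            self.state[key] = value     # nothing is sent anywhere

        def read(self, key):
            return self.state.get(key)  # the replica must remember to poll


    class PushStore:
        """Push model: the primary notifies every subscribed replica on write."""
        def __init__(self):
            self.state = {}
            self.replicas = []

        def subscribe(self, replica):
            self.replicas.append(replica)

        def write(self, key, value):
            self.state[key] = value
            for replica in self.replicas:  # replication happens automatically
                replica[key] = value


    pull = PullStore()
    pull.write("cart", 3)
    print(pull.read("cart"))      # → 3 (only because the client asked)

    push = PushStore()
    browser_cache = {}
    push.subscribe(browser_cache)
    push.write("cart", 5)
    print(browser_cache["cart"])  # → 5 (arrived without the client asking)
    ```

    In distributed databases the push side typically lives inside the replication protocol, while in web applications the pull side is usually hand-written fetch logic — the asymmetry the abstract points out.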
  • Kousa, Jami (2020)
    Teaching DevOps is challenging as it is inherently cross-functional between development and operations. This thesis presents an examination of a course and its development iterations in an attempt to find an approach that may be used in DevOps education in a higher-education context. The course focuses on learning one tool in the DevOps toolchain, containerization with Docker, in a massive open online course (MOOC). By investigating the course during its design process, along with its attendees, the challenges that students and course instructors faced are discussed. The primary source of information from students is a survey that students answered before and after participating in the course. The challenges of teaching DevOps practices vary from the teaching methods to the types of exercises and the level of industry imitation or abstraction. In comparison, students had fewer challenges than expected with the course contents. The survey results offered insight into the student experience concerning DevOps, unveiling a demand both for further development of the course and for new courses that link development and operations.
  • Vainio, Marko (2021)
    When developing an application using a microservice architecture, the application consists of multiple distributed, independent and loosely coupled services. These services then communicate with each other through a network in order to form a functioning application. The benefits of developing an application as a set of independent services, as opposed to a single monolithic application, are numerous. The services may be developed and deployed independently, which enables, for example, the use of a different programming language for a specific service. Services designed for specific tasks are also usually relatively small and as such easier to develop, understand and test. The challenges of building an application using a microservice architecture, as opposed to the traditional monolithic one, include identifying suitable functionalities that can be extracted into a service; end-to-end testing of the extracted functionality also becomes challenging. Throughout this thesis the most important benefits and challenges of the microservice architecture are investigated with a literature review as well as in practice with a case study. During the case study a specific functionality in a largely monolithic application was transformed into a microservice, and the benefits and challenges that became evident during the process are covered in the thesis.
  • Hyttinen, Miika (2022)
    An industrial classification system is a set of classes meant to describe different areas of business. Finnish companies are required to declare one main industrial class from the TOL 2008 industrial classification system. However, the TOL 2008 system is designed by the Finnish authorities and does not serve the versatile business needs of the private sector. The problem was discovered in Alma Talent Oy, the commissioner of the thesis. This thesis follows the design science approach to create new industrial classifications. To find out what the problem with the TOL 2008 industrial classifications is, qualitative interviews with customers were carried out. The interviews revealed several needs for new industrial classifications. According to the customer interviews conducted, classifications should be 1) more detailed, 2) simpler, 3) updated regularly, 4) multi-class and 5) able to correct wrongly assigned TOL classes. To create new industrial classifications, unsupervised natural language processing techniques (clustering) were tested on Finnish natural language data sets extracted from company websites. The largest data set contained the websites of 805 Finnish companies. The experiment revealed that the interactive clustering method was able to find meaningful clusters for 62%-76% of samples, depending on the clustering method used. Finally, the found clusters were evaluated based on the requirements set by the customer interviews. The number of classes extracted from the data set was significantly lower than the number of distinct TOL 2008 classes in the data set. The results indicate that an industrial classification system created with clustering would contain significantly fewer classes than the TOL 2008 industrial classifications. Also, the system could be updated regularly and could correct wrongly assigned TOL classes. Therefore, interactive clustering was able to satisfy three of the five requirements found in the customer interviews.
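    The clustering step described above can be sketched with a toy example: bag-of-words vectors and a small deterministic k-means in pure Python. The document snippets below are invented stand-ins, far simpler than the thesis's interactive clustering of 805 real company websites:

    ```python
    import math

    # Invented stand-ins for company website texts.
    docs = [
        "software development cloud services consulting",
        "cloud software platform development",
        "construction building renovation services",
        "building construction concrete renovation",
    ]

    # Bag-of-words vectors over the shared vocabulary.
    vocab = sorted({word for doc in docs for word in doc.split()})

    def vectorize(doc):
        words = doc.split()
        return [words.count(word) for word in vocab]

    points = [vectorize(doc) for doc in docs]

    def dist(a, b):
        # Euclidean distance between two count vectors.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def kmeans(points, iters=10):
        # Deterministic 2-means for the sketch: seed with first and last point.
        centers = [points[0][:], points[-1][:]]
        labels = [0] * len(points)
        for _ in range(iters):
            labels = [min(range(2), key=lambda c: dist(p, centers[c]))
                      for p in points]
            for c in range(2):
                members = [p for p, lab in zip(points, labels) if lab == c]
                if members:
                    centers[c] = [sum(col) / len(members)
                                  for col in zip(*members)]
        return labels

    labels = kmeans(points)
    print(labels)  # → [0, 0, 1, 1]: software firms vs. construction firms
    ```

    A real pipeline would need far richer features (e.g. TF-IDF over full websites) and a human in the loop to name the clusters as industrial classes, which is where the interactive part of the thesis's method comes in.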
  • Koskinen, Marko (2021)
    Modern software systems often produce vast amounts of software usage data. Previous work, however, has indicated that such data is often left unutilized, which leaves a gap for methods and practices that put the data to use. The objective of this thesis is to determine and test concrete methods for utilizing software usage data and to learn what use cases and benefits can be achieved via such methods. The study consists of two interconnected parts. Firstly, a semi-structured literature review is conducted to identify methods and use cases for software usage data. Secondly, a subset of the identified methods is experimented with by conducting a case study to determine how developers and managers experience the methods. We found that a wide range of methods exists for utilizing software usage data, through which many software development-related use cases can be fulfilled. However, in practice, apart from debugging purposes, software usage data is largely left unutilized. Developers and managers nevertheless share a positive attitude towards employing methods of utilizing software usage data. In conclusion, software usage data has a lot of potential, and developers and managers are interested in putting utilization methods to use. Furthermore, the information available via these methods is difficult to replace: methods for utilizing software usage data can provide irreplaceable information that is relevant and useful for both managers and developers. Therefore, practitioners should consider introducing such methods in their development practices.