
Browsing by discipline "Tietojenkäsittelytiede" (Computer Science)


  • Liao, Ke (2020)
    With the growth of web services such as e-commerce, job-hunting and movie websites, recommendation systems play an increasingly important role in helping users find their potential interests among overwhelming amounts of information. A great number of studies are available in this field, leading to a wide variety of recommendation approaches to choose from when researchers implement their own recommendation systems. This paper gives a systematic literature review of recommendation systems, with sources extracted from Scopus. The summary of each paper focuses on the research problem addressed, the similarity metrics used, the proposed method and the evaluation metrics. Beyond the methodology used in traditional recommendation systems, the review also covers how additional performance-enhancement methods, such as machine learning, matrix factorization techniques and big data tools, are applied in several papers. After reading this paper, researchers should understand the existing types of recommendation systems, the general process they follow, and how performance-enhancement methods can improve a system's performance, so that they can choose a recommendation system that interests them for implementation or research purposes.
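The similarity metrics such reviews survey can be illustrated with a minimal, self-contained sketch: cosine similarity between two users' rating vectors is one of the most common choices in collaborative filtering. The items and ratings below are invented for illustration, not taken from the paper.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two rating vectors given as dicts (item -> rating)."""
    common = set(a) & set(b)
    num = sum(a[i] * b[i] for i in common)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

# Two users' ratings over a small, invented catalogue
u1 = {"movie_a": 5, "movie_b": 3, "movie_c": 4}
u2 = {"movie_a": 4, "movie_b": 3, "movie_d": 5}
print(round(cosine_similarity(u1, u2), 2))  # → 0.58
```

A recommender would compute this for all user (or item) pairs and recommend items liked by the nearest neighbours.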
  • Smirnova, Inna (2014)
    There is an increasing need for organizations to collaborate with internal and external partners on a global scale to create software-based products and services. Many aspects and risks need to be addressed when setting up such global collaborations: different types of collaboration, such as engineering collaborations or innovation-focused collaborations, need to be considered, and further aspects such as cultural and social issues, coordination, infrastructure, organizational change processes, and communication need to be examined. Although experience with setting up global collaborations is already available, it mainly focuses on certain specific areas; an overall holistic approach that guides companies in systematically setting up global collaborations for software-based products is largely missing. The goal of this thesis is to analyze existing literature and related information and to extract the topics that need to be taken into account when establishing global software development collaborations - to identify solutions, risks, success factors, strategies, good experiences and good examples. This information is structured so that it can serve as a well-grounded, holistic approach that effectively guides companies in setting up long-term global collaborations in the software development domain. The presented approach is based on scientific findings reported in the literature, driven by industry needs, and confirmed by industry experts. The thesis consists of two main parts. In the first part, a literature study analyzes existing experience reports, case studies and other available literature to identify which aspects and practices organizations need to consider when setting up global collaborations in software development. In the second part, based on the results of the literature review and consultation with the industrial partner Daimler AG, the identified aspects and practices are structured and prioritized in the form of activity roadmaps, which present a holistic guide for setting up global collaborations. The developed guidance worksheet, the so-called 'Global canvas', is meant to be a guide to and reminder of all major activities that are necessary when doing global collaborations for software-based products and services. The main contributions of this thesis are an analysis of the state of the practice in setting up global software development collaborations, the identification of aspects and successful practices that organizations need to address, and the creation of a holistic approach that presents scientific findings to industry in an effective and credible way and guides companies in systematically setting up global collaborations.
  • Mukhtar, Usama (2020)
    Sales forecasting is crucial for running any retail business efficiently. Profits are maximized when popular products are available to meet demand, and it is equally important to minimize the losses caused by unsold stock. Fashion retailers face particular challenges that make sales forecasting difficult, such as the short life cycle of products and the introduction of new products throughout the year. The goal of this thesis is to study forecasting methods for fashion. We use the product attributes of the products in one season to build a model that can forecast sales for all the products in the next season. Sales for different attributes are analysed over three years; sales vary across attribute values, which indicates that a model fitted on product attributes may be used for forecasting. A series of experiments is conducted with multiple variants of the datasets. We implemented multiple machine learning models and compared them against each other, reporting empirical results along with baseline comparisons to answer the research questions. Results from the first experiment indicate that the machine learning models perform only about as well as the baseline model that uses mean values as predictions; the results may improve in the coming years as more data becomes available for training. The second experiment shows that models built for specific product groups are better than generic models used to predict sales for all kinds of products. Since we observed a heavy tail in the data, a third experiment used logarithmic sales for prediction, but the results do not improve much compared to the previous methods. The conclusion of the thesis is that machine learning methods can be used for attribute-based sales forecasting in the fashion industry, but more data is needed, and modelling specific groups of products brings better results.
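The mean-value baseline the thesis compares against can be sketched as follows. This is a generic illustration of attribute-group mean prediction, not the thesis's actual model; the attribute name and sales figures are invented.

```python
from collections import defaultdict
from statistics import mean

def fit_group_means(history):
    """Baseline forecaster: mean of past sales per attribute value (here: colour)."""
    groups = defaultdict(list)
    for attrs, sales in history:
        groups[attrs["colour"]].append(sales)
    overall = mean(sales for _, sales in history)  # fallback for unseen values
    return {value: mean(v) for value, v in groups.items()}, overall

def predict(model, attrs):
    group_means, overall = model
    return group_means.get(attrs["colour"], overall)

# Invented sales history for one season
history = [({"colour": "black"}, 120), ({"colour": "black"}, 100),
           ({"colour": "red"}, 40)]
model = fit_group_means(history)
print(predict(model, {"colour": "black"}))  # → 110
```

A learned model (e.g. a regression on all attributes) would be evaluated against exactly this kind of per-group mean.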
  • Thakur, Mukesh (2017)
    Over the past decade, cloud services have enabled individuals and organizations to perform different types of tasks such as online storage, email, and on-demand movies and TV shows. Cloud services have also enabled on-demand deployment of applications at low cost on elastic, scalable and fault-tolerant systems. These services are offered by cloud providers who use an authentication, authorization and accounting framework based on the client-server model. Though this model has been used for decades, studies show it is vulnerable to different attacks and inconvenient for end users. In addition, the cloud provider has total control over user data, which it can monitor, trace, leak and even modify at will. Thus, user data ownership, digital identity and the use of cloud services have raised privacy and security concerns for users. In this thesis, blockchain and its applications are studied, and an alternative model for authentication, authorization and accounting is proposed based on the Ethereum blockchain. Furthermore, a prototype is developed that enables users to consume cloud services by authenticating, authorizing and accounting with a single identity, without sharing any private user data. Experiments are run with the prototype to verify that it works as expected, and measurements are taken to assess the feasibility and scalability of the solution. The final part of the thesis discusses the pros and cons of the proposed solution and sketches perspectives for further research.
  • Kallonen, Leo (2020)
    Robotic process automation (RPA) is an emerging field in software engineering that is applied in a wide variety of industries to automate repetitive business processes. While the tools for creating RPA projects have evolved quickly, testing in these projects has not yet received much attention. The purpose of this thesis was to study how the regression testing of RPA projects created with UiPath could be automated while avoiding the most common pitfalls of test automation projects: unreliability, excessive cost, lack of reusable components and overly difficult implementation. As a case study, an automated regression test suite was created with UiPath for an existing RPA project that is currently tested manually. The results imply that UiPath can be used to create not just the RPA project but also its regression test suite: the automated suite could run all the tests in the regression suite that is currently run manually. The common test automation pitfalls were also mostly avoided: the structure of the project can be reused for other test projects, the project can recover from unexpected errors, and implementing the tests does not require a high level of programming knowledge. The main challenge proved to be the implementation cost, which was increased by a longer than expected test development time. Another finding was that the measures taken to address the pitfalls will likely work only with RPA projects that are at most as complex as the sample project; with more complex projects there will also likely be more challenges with test data creation. As a result, for complex projects, manual regression testing may be the better option.
  • Vainio, Antero (2020)
    Nowadays the Internet is used as a platform for providing a wide variety of services, which has created challenges in scaling IT infrastructure management. Cloud computing is a popular solution for scaling infrastructure, either by building a self-hosted cloud or by using a cloud platform provided by an external organization; this way some of the challenges of large scale can be transferred to the cloud administrators. OpenStack is a group of open-source software projects for running cloud platforms and is currently the most commonly used software for building private clouds. Since it was initially published by NASA and Rackspace, it has been used by organizations such as Walmart, China Mobile and the CERN nuclear research organization. The largest production deployments of OpenStack clouds consist of thousands of physical servers located in multiple datacenters. The OpenStack community has created many deployment methods that take advantage of automated software configuration management; they are built with state-of-the-art software for automating administrative tasks and take different approaches to automating infrastructure management for OpenStack. This thesis compares some of the automated deployment methods for OpenStack and examines the benefits of using automation for configuration management. We present comparisons based on technical documentation as well as reference literature. Additionally, we conducted a questionnaire for OpenStack administrators about the use of automation, and we tested one of the deployment methods in a virtualized environment.
  • Stenudd, Juho (2013)
    This Master's thesis describes one example of how to automatically generate tests for real-time protocol software. Automatic test generation is performed using model-based testing (MBT), in which test cases are generated from a behaviour model of the system under test (SUT). This model expresses the requirements of the SUT; many parameters can be varied and test sequences randomised. In this context, the real-time protocol software is a system component of a Nokia Siemens Networks (NSN) Long Term Evolution (LTE) base station. This component, named MAC DATA, is the system under test in this study. 3GPP has standardised the protocol stack for the LTE eNodeB base station; MAC DATA implements most of the functionality of the Medium Access Control (MAC) and Radio Link Control (RLC) protocols, two protocols of the LTE eNodeB. Because the software in question is complex telecommunication software, implementing MBT for MAC DATA system component testing is challenging. First, the expected behaviour of the component has to be modelled. Because it is not practical to model everything, the most relevant parts of the component have to be identified, and the most important parameters have to be selected from the huge parameter space so that they can be varied and randomised. With MBT, a vast number of different kinds of users can be created, which is not feasible in manual test design, and generating a very long test case takes only a short computing time. In addition to functional testing, MBT is used in performance and worst-case testing by executing long test cases based on traffic models; MBT has proven suitable for this challenging kind of testing. This study uses three traffic models: smartphone-dominant, laptop-dominant and mixed. MBT is integrated into a continuous integration (CI) system, which automatically runs MBT test case generation and execution overnight. The main advantage of the MBT implementation is the ability to create different kinds of users and simulate real-life system behaviour; this way, hidden defects can be found in the test environment and the SUT.
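The core idea of model-based testing, generating test sequences from a behaviour model, can be sketched with a toy state machine. The model, states and actions below are invented and vastly simpler than the MAC DATA component described above.

```python
import random

# Toy behaviour model of an SUT: state -> {action: next state}
MODEL = {
    "idle":      {"connect": "connected"},
    "connected": {"send": "connected", "disconnect": "idle"},
}

def generate_test_case(model, start="idle", length=6, seed=42):
    """Random walk over the model; the recorded action sequence is the test case."""
    rng = random.Random(seed)
    state, steps = start, []
    for _ in range(length):
        action = rng.choice(sorted(model[state]))  # sorted for reproducibility
        steps.append(action)
        state = model[state][action]
    return steps

print(generate_test_case(MODEL))
```

Each generated sequence is by construction valid with respect to the model, and long randomised walks are exactly what makes MBT useful for performance and worst-case testing.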
  • Gafurova, Lina (2018)
    Automatic fall detection is an important challenge in the public health care domain. The problem primarily concerns the growing population of the elderly, who are at considerably higher risk of falling, and for whom falls may result in serious injuries or even death. In this work we propose a machine learning solution for fall detection, which can be integrated into a monitoring system as a detector of falls in image sequences. Our approach is solely camera-based and is intended for indoor environments. To detect falls, we combine the variation of the human shape, determined with the help of an approximating ellipse, with the motion history. The feature vectors we build are computed over sliding time windows of the input images and are fed to a Support Vector Machine for classification. The decision for a whole set of images is based on additional rules, which restrict the sensitivity of the method. To evaluate our fall detector fairly, we conducted extensive experiments on a wide range of normal activities, which we used as the contrast class to falls. Reliable recognition rates suggest the effectiveness of our algorithm and motivate further improvement.
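The ellipse approximation of the human shape can be computed from second-order image moments of a binary silhouette mask. The sketch below is a generic illustration of that one step, not the thesis's code, and the synthetic mask is invented: an upright person yields a near-vertical, strongly elongated ellipse, while a fallen person would yield a near-horizontal one.

```python
import numpy as np

def ellipse_features(mask):
    """Approximate a silhouette by an ellipse via second-order image moments;
    return the ellipse orientation (radians) and major/minor axis ratio."""
    ys, xs = np.nonzero(mask)
    x, y = xs - xs.mean(), ys - ys.mean()
    mu20, mu02, mu11 = (x * x).mean(), (y * y).mean(), (x * y).mean()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    common = np.sqrt(4 * mu11 ** 2 + (mu20 - mu02) ** 2)
    major = np.sqrt(2 * (mu20 + mu02 + common))
    minor = np.sqrt(2 * (mu20 + mu02 - common))
    return theta, major / minor

# Synthetic "standing person" blob: much taller than wide
mask = np.zeros((40, 40), dtype=bool)
mask[5:35, 18:22] = True
theta, ratio = ellipse_features(mask)
print(round(float(ratio), 2))  # strongly elongated: ratio well above 1
```

A sudden change of the orientation and axis ratio across a sliding window is the kind of shape-variation feature that can be fed to the classifier.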
  • Kesulahti, Aki (2019)
    This thesis surveys the potential security threats to autonomous vehicles and the risk levels of those threats. For the different attack methods and types, it examines which weaknesses they exploit and how these attacks can be defended against. The potential security threats divide into threats to the vehicle's operation and threats to the privacy of the person in the vehicle. Threats to the vehicle's operation divide further into attacks directly on the vehicle's own systems, attacks on the vehicle indirectly through networked vehicles and telematics, and attacks through infotainment systems and portable devices. Within the vehicle's systems, security threats exist, among other places, in the navigation system (such as map data and satellite positioning), in the vehicle's near-field sensors (such as cameras, lidar, radar and acoustic sensors), and in the vehicle's internal devices and sensors (such as wireless tyre-pressure sensors). Through networked vehicles, the threats include various forged messages to the safety system or to congestion management, denial-of-service attacks, map-data poisoning, and interference with pseudonym changes. Attacks can also come through the infrastructure, as denial-of-service attacks or via forged safety messages or map data. The vehicle's CAN bus can be attacked both through the vehicle's own sensors and with networked messages from another vehicle or from the infrastructure; the CAN bus is vulnerable to denial-of-service attacks and spoofed messages. Infotainment systems and portable devices can be attacked through, among others, FM radio, media files or a smartphone. Privacy is threatened by eavesdropping and covert viewing, and by various ways of tracking the vehicle's location, such as speed tracking, large fleets of autonomous vehicles, and kilometre-based road-use charging. Means of protecting against the security threats are reviewed both separately for each attack and collectively, categorised into IT security measures, sensor-data protection and privacy protection. Traditional IT security measures such as authentication, secured connections and firewalls are necessary in autonomous vehicles as well. The reliability and fault tolerance of sensor data can be improved with filtering, sensor fusion and swarm prediction. In addition, blockchain-based solutions have been proposed for privacy problems, such as location tracking, in networked vehicles. A risk analysis was performed on the reviewed threats to determine the magnitude of each risk, using the probability of a successful attack and the attack's impact, combined in a risk matrix. The greatest risks in attacks on autonomous vehicles are satellite-positioning spoofing and jamming, the replacement of map data, an electromagnetic pulse, and stealth material that interferes with radar operation. The greatest risks in attacks on networked vehicles and traffic telematics are forged safety messages from networked vehicles and map-data poisoning arriving through networked vehicles. Finally, the thesis discusses the acceptable level of risk, considers the reliability of the methods used, and presents the author's own opinions.
  • Heino, Lauri (2020)
    The suffix array is a space-efficient data structure that provides fast access to all occurrences of a search pattern in a text. Typically suffix arrays are queried with algorithms based on binary search. With a pre-computed index data structure that provides fast access to the relevant suffix array interval, querying can be sped up, because the binary search then operates over a smaller interval. In this thesis a number of different ways of implementing such an index data structure are studied, and the performance of each implementation is measured. Our experiments show that with relatively small data structures, suffix array query times can be reduced by almost 50%. There is a trade-off between the size of the data structure and the speed-up it offers.
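The baseline binary-search query that such an index narrows down can be sketched as follows. The construction here is the naive one (sorting full suffixes), for illustration only; the sentinel trick for the interval's upper bound assumes the text contains no U+FFFF character.

```python
def suffix_array(text):
    """Naive O(n^2 log n) construction: sort suffix start positions."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def find_occurrences(text, sa, pattern):
    """Binary search the suffix array for the interval of suffixes
    starting with `pattern`; return the matching text positions."""
    def lower(p):
        lo, hi = 0, len(sa)
        while lo < hi:
            mid = (lo + hi) // 2
            if text[sa[mid]:sa[mid] + len(p)] < p:
                lo = mid + 1
            else:
                hi = mid
        return lo

    start = lower(pattern)
    end = lower(pattern + "\uffff")  # just past the last suffix with this prefix
    return sorted(sa[start:end])

text = "banana"
sa = suffix_array(text)
print(find_occurrences(text, sa, "ana"))  # → [1, 3]
```

A pre-computed index over pattern prefixes would hand `lower` a tighter initial `(lo, hi)` interval, which is where the measured speed-up comes from.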
  • Torkko, Petteri (2013)
    Organisations' business systems are typically closed entities tailored to the organisation's operations, yet they need to integrate with other systems. Integration platforms provide models and services with which the exchange of data and processes between heterogeneous systems can be simplified at a high level. The purpose of this thesis is to find a more modern platform to run alongside an outdated integration platform. By comparing the existing platform against the methods, architectures and design patterns typically used in system integration, one platform (Spring Framework and its extensions) is selected from many for closer study. Based on problems in the existing platform identified through a user survey, goal-based requirements and associated metrics are derived for the new platform. Based on the results of the platform comparison, the new platform fulfils the requirements set for it and fixes the problems of the existing platform.
  • Riippa, Väinö (2016)
    This Master's thesis is an empirical literature review that studies open data in the area of healthcare. The study describes what open data is and how it became the concept it stands for today. The first chapter looks at open data from a general viewpoint. The next chapter compares open data processes from the points of view of the publisher and the consumer. After the processes, open data is examined in the healthcare and welfare sectors by reviewing current practices, application solutions and the expectations placed on open data. The study offers the reader an informative review of process models for open data; after reading the thesis, these process models can be applied to data openings in the reader's own organization.
  • Koivisto, Timo (2016)
    This thesis is a review of bandit algorithms in information retrieval. In information retrieval, a result list should include the most relevant documents, and the results should also be non-redundant and diverse. To achieve this, some form of feedback is required. This thesis describes implicit feedback collected from user interactions by using interleaving methods, which allow alternative rankings of documents to be presented in result lists. Bandit algorithms can then be used to learn from user interactions in a principled way. The reviewed algorithms include dueling bandits, contextual bandits, and contextual dueling bandits. Additionally, coactive learning and preference learning are described. Finally, the algorithms are summarized using regret as a performance measure.
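Regret as a performance measure can be illustrated with the simplest bandit strategy, epsilon-greedy, on an invented Bernoulli bandit. This is a generic sketch rather than one of the interleaving-based algorithms the thesis reviews: regret accumulates whenever a suboptimal arm is played.

```python
import random

def epsilon_greedy(true_means, steps=5000, eps=0.1, seed=0):
    """Run epsilon-greedy on a Bernoulli bandit; return cumulative regret."""
    rng = random.Random(seed)
    k = len(true_means)
    counts, values = [0] * k, [0.0] * k
    best = max(true_means)
    regret = 0.0
    for _ in range(steps):
        if rng.random() < eps:                                  # explore
            arm = rng.randrange(k)
        else:                                                   # exploit
            arm = max(range(k), key=lambda i: values[i])
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]     # running mean
        regret += best - true_means[arm]                        # per-step regret
    return regret

print(round(epsilon_greedy([0.2, 0.5, 0.8]), 1))
```

A good learner's cumulative regret grows much more slowly than that of a policy that keeps playing a bad arm; dueling bandits apply the same measure when feedback is only pairwise preferences.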
  • Sotala, Kaj (2015)
    This thesis describes the development of 'Bayes Academy', an educational game which aims to teach an understanding of Bayesian networks. A Bayesian network is a directed acyclic graph describing a joint probability distribution over n random variables, where each node in the graph represents a random variable. To find a way to turn this subject into an interesting game, this work draws on the theoretical background of meaningful play. Among other requirements, actions in the game need to affect the game experience not only in the immediate moment but also at later points in the game. This is accomplished by structuring the game as a series of minigames in which observing the value of a variable consumes 'energy points', a resource whose use the player needs to optimize because the pool of points is shared across the minigames. The goal of the game is to maximize the amount of 'experience points' earned by minimizing the uncertainty in the networks presented to the player, which in turn requires a basic understanding of Bayesian networks. The game was empirically tested on online volunteers who were asked to fill in a survey measuring their understanding of Bayesian networks both before and after playing the game. Players demonstrated an increased understanding of Bayesian networks after playing, in a manner that suggested a successful transfer of learning from the game to a more general context. The learning benefits were gained despite the players generally not finding the game particularly fun. ACM Computing Classification System (CCS): Applied computing - Computer games; Applied computing - Interactive learning environments; Mathematics of computing - Bayesian networks
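The joint distribution a Bayesian network factorizes, and the kind of uncertainty-reducing inference the game is built around, can be sketched with the classic rain/sprinkler/wet-grass network. The probabilities below are invented for illustration and are not from the thesis.

```python
from itertools import product

# Toy network: Rain and Sprinkler are parents of WetGrass
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}
P_wet = {  # P(WetGrass=True | Rain, Sprinkler)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.0,
}

def joint(r, s, w):
    """Joint probability from the network's factorization."""
    pw = P_wet[(r, s)]
    return P_rain[r] * P_sprinkler[s] * (pw if w else 1 - pw)

def posterior_rain_given_wet():
    """Inference by enumeration: P(Rain=True | WetGrass=True)."""
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
    return num / den

print(round(posterior_rain_given_wet(), 3))  # → 0.74
```

Observing a variable (here, that the grass is wet) sharpens the distribution over the others, which is exactly the uncertainty reduction the game rewards with experience points.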
  • Wikström, Axel (2019)
    Continuous integration (CI) and continuous delivery (CD) can be seen as an essential part of modern software development. CI/CD means always having the software in a deployable state, which is accomplished by continuously integrating code into a main branch and automatically building and testing it; version control and dedicated CI/CD tools can be used for this. This thesis consists of a case study whose aim was to find the benefits and challenges of implementing CI/CD in the context of a Finnish software company. The study was conducted with semi-structured interviews. The benefits of CD that were found include faster iteration, better assurance of quality, and easier deployments. The challenges identified were related to testing practices, infrastructure management and company culture. It is also difficult to implement a full continuous deployment pipeline for the case project, mostly due to the risks involved in updating software in business-critical production use. The results of this study were found to be similar to those of previous studies. The case company's adoption of modern CI/CD tools such as GitLab and cloud computing is also discussed: while the tools can make implementing CI/CD easier, they still pose challenges in adapting them to specific use cases.
  • Tuominen, Pasi (2015)
    Data repositories often contain multiple records that describe the same object. This thesis compares methods for finding such records. The experiments were run on a dataset of 6.4 million bibliographic records, using the titles of the works in the dataset to compare the methods. Two key characteristics of each method were measured: the number of duplicates found, and its ratio to the number of candidate pairs generated. A combination of two methods proved best for deduplicating the dataset. The sorted neighbourhood method found the most true duplicates, but also the most irrelevant candidates. Suffix-array clustering additionally found a set of duplicates that no other method found. Together, these two methods found nearly all the duplicates that all the compared methods found. Fault-tolerant methods based on Levenshtein distance proved inefficient for deduplicating titles.
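The sorted neighbourhood method mentioned above can be sketched as follows: records are sorted by a normalised key, and only records within a sliding window are compared as duplicate candidates, which keeps the candidate count far below the quadratic all-pairs number. The titles below are invented examples.

```python
def sorted_neighborhood(records, key, window=3):
    """Sorted-neighbourhood blocking: sort by a key, then pair each record
    only with its neighbours inside a sliding window."""
    order = sorted(records, key=key)
    for i, rec in enumerate(order):
        for other in order[i + 1:i + window]:
            yield rec, other

titles = ["Seitsemän veljestä", "Seitsemän veljesta", "Tuntematon sotilas",
          "Tuntematon  sotilas", "Kalevala"]
# Normalised key: lowercase, spaces removed, so near-duplicates sort adjacently
pairs = list(sorted_neighborhood(titles, key=lambda t: t.lower().replace(" ", "")))
print(len(pairs))  # → 7, versus 10 for all pairs of 5 records
```

Each candidate pair would then be checked with a proper similarity measure; the trade-off measured in the thesis is exactly between duplicates caught by the window and irrelevant candidates it generates.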
  • Toivonen, Mirva (2015)
    Big data creates a variety of business possibilities and helps to gain competitive advantage through prediction, optimization and adaptability. Much big data analysis does not consider the impact of errors or inconsistencies across the different sources from which the data originates, or how frequently the data is acquired. This thesis examines big data quality challenges in the context of business analytics, with the intent of improving knowledge of big data quality issues and of testing big data. Most of the quality challenges are related to understanding the data, coping with messy source data and interpreting analytical results. Producing analytics requires subjective decisions along the analysis pipeline, and analytical results may not lead to objective truth. Errors in big data are not corrected as in traditional data; instead, the focus of testing moves towards process-oriented validation.
  • Röyskö, Visa (2020)
    Botnets are networks of devices that have been infected with malware. The botnet's controller can issue commands to these machines and direct them to carry out attacks, such as distributed denial-of-service attacks and sending spam. This thesis compares three different botnet programs, reviewing their topologies and distinctive features. Finally, it surveys different ways of defending against botnets.
  • Suominen, Kalle (2013)
    Business and operational environments are becoming more and more frenetic, forcing companies and organizations to respond to changes faster. This trend is reflected in software development as well: IT units have to deliver needed features faster in order to bring business benefits sooner. During the last decade, agile methodologies have provided tools to answer this ever-growing demand. Scrum is one of the most widely used agile methodologies. It is said that in large-scale organizations Scrum should be implemented using both bottom-up and top-down approaches. In big organizations, software systems are complicated and deeply integrated with each other, so no single team can handle the whole software development process alone. Individual teams may want to start using Scrum before the whole organization is ready to support it, leading to a situation where one team applies agile principles while most of the surrounding teams and organizations continue with established non-agile practices. In such cases the bottom-up approach is the only option. When the top-down part is missing, are the benefits also lost? The target of this case study is to find out whether implementing Scrum using only a bottom-up approach brought benefits. In the target unit, which was part of a large organization, Scrum-based practices were implemented to replace an earlier waterfall-based approach. The analyses were made on data collected by a survey and from a requirement management tool that was in use during both the old and the new ways of working. The expression 'Scrum-based practices' is used because not all the finer points of Scrum could be implemented, owing to the surrounding non-agile teams and official non-agile procedures; this was also an obstacle to implementing Scrum as fully as possible. Most of the defined targets set for the implementation of Scrum-based practices were achieved, and other non-targeted benefits emerged, so in this context we can conclude that benefits were gained. The absence of the top-down approach clearly made the implementation more difficult and incomplete; however, it did not prevent gaining benefits. The target unit also faced the previously mentioned difficulties of using Scrum-based practices while the surrounding units used non-agile processes. The lack of well-established numerical estimates of the requirements' business value reduced the power of Scrum at the company level, because the values were relative, subjective opinions of the business representatives. In backlog prioritization, when most items are so-called high-priority ones, there is no way to evaluate which is more valuable, and prioritization becomes more or less a lottery.
  • Markkanen, Jani (2012)
    B-trees are widely used index structures. This thesis examines concurrency control and recovery for B-trees, particularly from the perspective of a database management system. Of the algorithms for the Blink-tree, which provides efficient concurrency control, two are presented: one based on tracking node deletions, and one based on completing structure modifications during traversal. The latter is implemented and its performance is evaluated experimentally. The experimental evaluation shows that for insert and delete operations, the cost of concurrency control rises to as much as 94% at the evaluation's maximum operation rate. At the same maximum operation rate, concurrency control for the search operation takes less than one percent of the total time. The high concurrency-control cost of insert and delete operations is due to update operations U-latching the root node. U-latching the root is often an unnecessarily strong measure, since the latch needs to be upgraded to an X-latch for writing in only 0.06% of update operations. To relieve contention at the root, further development ideas for the algorithm are presented, based on the rarity of the need for root U-latching and on the possibility of always restarting the tree traversal from the root.