
Browsing by discipline "Computer science"


  • Markkanen, Jani (2012)
    B-trees are widely used index structures. This thesis examines concurrency control and recovery for B-trees, particularly from the perspective of a database management system. Of the algorithms for the Blink-tree, which provides efficient concurrency control, two are presented: one based on tracking node deletions and one based on completing structure modifications during traversal. The latter is implemented and its performance is evaluated experimentally. The evaluation shows that for insert and delete operations the cost of concurrency control rises to as much as 94% of total time at the evaluation's maximum operation rate, while at the same rate the concurrency control of a search operation takes less than one percent of total time. The high concurrency-control cost of inserts and deletes is caused by update operations U-latching the root node. U-latching the root is often an unnecessarily strong measure, since the latch needs to be upgraded to an exclusive X latch for writing in only 0.06% of update operations. To relieve congestion at the root, ideas for further development of the algorithm are presented, based on the rarity of the need to U-latch the root and on the possibility of always restarting the tree traversal from the root. (A toy latch sketch follows below.)
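    A minimal sketch of the S/U/X latching behaviour described above, assuming a classic latch-mode compatibility matrix: a U latch admits concurrent readers but excludes other U and X requests, so every updater serializes at the root even though the upgrade to X is rarely needed. The Latch class and its API are illustrative, not the thesis implementation.

```python
import threading

# Compatibility matrix: may a new request in mode `req` be granted while a
# latch in mode `held` is held by another thread? U coexists with readers (S)
# but not with other updaters (U) or writers (X).
COMPATIBLE = {
    ("S", "S"): True,  ("S", "U"): True,  ("S", "X"): False,
    ("U", "S"): True,  ("U", "U"): False, ("U", "X"): False,
    ("X", "S"): False, ("X", "U"): False, ("X", "X"): False,
}

class Latch:
    """A per-node latch supporting shared (S), update (U) and exclusive (X) modes."""
    def __init__(self):
        self._cond = threading.Condition()
        self._holders = []  # modes currently held on this node

    def acquire(self, mode):
        with self._cond:
            while not all(COMPATIBLE[(held, mode)] for held in self._holders):
                self._cond.wait()
            self._holders.append(mode)

    def upgrade(self, old, new):
        # The rare case measured in the thesis: raise U to X before writing.
        with self._cond:
            self._holders.remove(old)
            while not all(COMPATIBLE[(held, new)] for held in self._holders):
                self._cond.wait()
            self._holders.append(new)

    def release(self, mode):
        with self._cond:
            self._holders.remove(mode)
            self._cond.notify_all()

root = Latch()
root.acquire("U")        # an updater descends through the root
root.acquire("S")        # concurrent readers are still fine...
# root.acquire("U")      # ...but a second updater would block here
root.release("S")
root.upgrade("U", "X")   # needed in only ~0.06% of updates
root.release("X")
```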
  • Levitski, Andres (2016)
    With the increase in bandwidths available to internet users, cloud storage services have emerged to offer home users an easy way to share files and extend their available storage space. Most systems offer a limited free storage quota, and combining these resources from multiple providers could be intriguing to cost-oriented users. In this study, we will implement a virtual file system that utilizes multiple different commercial cloud storage services (Dropbox, Google Drive, Microsoft OneDrive) to store its data. The data will be distributed among the different services, and the structure of the data will be managed locally by the file system. The file system will run in user space using FUSE and will use the APIs provided by the cloud storage services to access the data. Our goal is to show that it is feasible to combine the free space offered by multiple services into a single, easily accessible storage medium. Building such a system requires design choices in multiple problem areas, ranging from data distribution and performance to data integrity and data security. We will show how our file system is designed to address these requirements, and will then conduct several tests to measure and analyze the level of performance provided by our system in different file system operation scenarios. The results will also be compared to the performance of using the distinct cloud storage services directly, without distributing the data. This will help us estimate the overhead, or possible gain, in performance caused by the distribution of data, and will also help us locate the bottlenecks of the system. Finally, we will discuss some of the ways that the system could be improved based on the test results and on examples from existing distributed file systems. (A minimal FUSE sketch follows below.)
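    A minimal sketch, using the fusepy package, of the kind of user-space file system the thesis describes: metadata is kept locally while file contents are dispatched to storage backends. The CloudBackend stub, the round-robin placement and the whole-file writes are simplifying assumptions for illustration; the thesis's actual design choices may differ.

```python
import errno
import stat
import time
from fuse import FUSE, FuseOSError, Operations  # pip install fusepy

class CloudBackend:
    """Stand-in for a cloud storage API client (Dropbox, Google Drive, OneDrive)."""
    def __init__(self):
        self.blocks = {}
    def put(self, key, data):
        self.blocks[key] = data
    def get(self, key):
        return self.blocks.get(key, b"")

class MultiCloudFS(Operations):
    def __init__(self, backends):
        self.backends = backends  # new files are placed round-robin here
        self.files = {}           # path -> (backend index, size): local metadata
        self.counter = 0

    def getattr(self, path, fh=None):
        now = time.time()
        if path == "/":
            return dict(st_mode=stat.S_IFDIR | 0o755, st_nlink=2,
                        st_ctime=now, st_mtime=now, st_atime=now)
        if path not in self.files:
            raise FuseOSError(errno.ENOENT)
        _, size = self.files[path]
        return dict(st_mode=stat.S_IFREG | 0o644, st_nlink=1, st_size=size,
                    st_ctime=now, st_mtime=now, st_atime=now)

    def readdir(self, path, fh):
        return [".", ".."] + [p.lstrip("/") for p in self.files]

    def create(self, path, mode, fi=None):
        self.files[path] = (self.counter % len(self.backends), 0)
        self.counter += 1
        return 0

    def write(self, path, data, offset, fh):
        idx, _ = self.files[path]
        self.backends[idx].put(path, data)  # simplistic: whole-file writes only
        self.files[path] = (idx, len(data))
        return len(data)

    def read(self, path, size, offset, fh):
        idx, _ = self.files[path]
        return self.backends[idx].get(path)[offset:offset + size]

if __name__ == "__main__":
    import sys
    # Usage: python multicloudfs.py <mountpoint>
    FUSE(MultiCloudFS([CloudBackend() for _ in range(3)]), sys.argv[1],
         foreground=True)
```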
  • Osmani, Lirim (2013)
    With recent advances in efficient virtualization techniques on commodity servers, cloud computing has emerged as a powerful technology for supporting a new generation of computing services based on the utility model. However, barriers to widespread adoption still exist, and the dominant platform is yet to emerge. Hence, the challenge of providing scalable cloud infrastructures requires continuous exploration of new technologies and techniques. This thesis describes an experimental investigation of integrating two such open source technologies, OpenStack and GlusterFS, to build our cloud environment. We designed a number of test case scenarios that help us answer questions about the performance, stability and scalability of the deployed cloud infrastructure. Additionally, work based on this thesis was accepted to the Conference on Computing in High Energy and Nuclear Physics (CHEP2013), and the paper is due to be published.
  • Hämäläinen, Heikki (2016)
    This thesis studies the Clojure programming language, a dialect of Lisp designed especially for concurrent programming. Clojure integrates tightly with the Java environment, and programs written in it run on the JVM. The thesis reviews the history of the Lisp languages, the general challenges of concurrent programming, and the basics of the functional programming paradigm. The concurrency features of Java and the JVM, and those of Clojure, are also covered. In the analysis part of the thesis, Clojure's and Java's concurrency constructs are compared with respect to performance and usability, among other things. Of Clojure's concurrency constructs, software transactional memory turned out to be computationally very heavy. Moreover, the lock-freedom of the concurrency constructs means that certain concurrent programming problems are hard to implement without resorting to Java's concurrency primitives; particularly with respect to synchronous concurrency constructs, the language has room for improvement. Compared to Java, Clojure's concurrency constructs are somewhat simpler to use, but this is largely due to Clojure's dynamic typing and functional foundations. (A sketch of the underlying compare-and-swap idea follows below.)
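    As a rough illustration of the lock-free, optimistic style discussed above, here is a Python sketch of the compare-and-swap retry loop behind an atom-like reference (in Clojure, swap! on an atom). It is a conceptual sketch only: CPython exposes no public atomic CAS, so a lock stands in for one.

```python
import threading

class Atom:
    """An atom-like reference updated by an optimistic CAS retry loop."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()  # stands in for a hardware CAS

    def deref(self):
        return self._value

    def _compare_and_set(self, old, new):
        with self._lock:
            if self._value is old:  # identity check, as in a real CAS
                self._value = new
                return True
            return False

    def swap(self, fn, *args):
        # Optimistic update: read, compute, attempt CAS, retry on conflict.
        while True:
            old = self._value
            new = fn(old, *args)
            if self._compare_and_set(old, new):
                return new

counter = Atom(0)
threads = [threading.Thread(target=lambda: [counter.swap(lambda x: x + 1)
                                            for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter.deref())  # 4000: no update was lost despite contention
```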
  • Kesseli, Henri (2013)
    Embedded systems are everywhere, and they vary widely in type and purpose. Yet many of these systems are islands in an age where more and more systems are being connected to the Internet. The ability to connect to the Internet can be taken advantage of in multiple ways; one is to use the resources that cloud computing can offer. Currently, there is no comprehensive overview of how embedded systems could be enhanced by cloud computing. In this thesis we study what cloud enhanced embedded systems are and what their benefits, risks, typical implementation methods, and platforms are. This study is executed as an extended systematic mapping study. The study shows that interest from academia and practice in cloud enhanced embedded systems has been growing significantly in recent years. The most prevalent research area is wireless sensor networks, followed by the more recent research area of the Internet of things. Most of the technology needed for implementing cloud enhanced embedded systems is available, but comprehensive development tools such as frameworks or middleware are scarce. Results of the study indicate that existing embedded systems and other non-computing devices would benefit from connectivity and cloud resources. This enables the development of new applications for consumers and industry that would not be possible without cloud resources. As an indication of this we see several systems developed for consumers, such as remotely controlled thermostats, media players that depend on cloud resources, and network attached storage systems that integrate cloud access and discovery. The academic literature is full of use cases for cloud enhanced embedded systems and model implementations. However, the actual integration process as well as specific engineering techniques are rarely explained or scrutinized. Currently, the typical integration process is very specific to the application. There are few examples of efforts to create dedicated development tools, more transparent protocols, and open hardware to support the development of ecosystems for cloud enhanced embedded systems.
  • Al-Hello, Muhammed (2012)
    The biological cell is a complicated and complex environment in which thousands of entities interact with each other in surprising ways. This integrated machinery continuously receives internal and external signals in order to perform the processes most vital to the continuation of life. Even though thousands of interactions are catalysed in very small spaces, biologists assert that there are no coincidences or accidental events. On the other hand, fast discoveries in biology and the rapid growth of the data pool make it ever more difficult to construct a concrete perspective that scientifically interprets all observations. Cooperation has therefore become necessary between biologists, mathematicians, physicists and computer engineers; the goal of this virtual cooperation is the pursuit of what is known as biological network modelling. This thesis aims to compare different computational tools built for modelling biological networks. Additionally, technical themes such as reaction kinetics are explained beforehand, as these topics form the backbone of the software's functionality. (A standard rate-law example is given below.) Besides the technical issues, the study compares features such as the GUI, the command line, importing/exporting files, etc.
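    As a concrete instance of the reaction kinetics such tools implement, two textbook rate laws are shown below; the specific reactions are illustrative and not taken from the thesis.

```latex
% Mass-action kinetics for an elementary reaction A + B -> C with rate constant k:
\frac{d[C]}{dt} = k\,[A][B], \qquad
\frac{d[A]}{dt} = \frac{d[B]}{dt} = -k\,[A][B]

% Michaelis-Menten kinetics for an enzyme-catalysed reaction with substrate S:
v = \frac{V_{\max}\,[S]}{K_m + [S]}
```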
  • Davoudi, Amin (2018)
    In the Internet age, malware poses a serious threat to information security. Many studies have been conducted on using machine learning for detecting malicious software. Although major breakthroughs have been achieved in this area, the problem has not been completely eradicated. In this thesis, we go through the concept of utilizing machine learning for malware detection and conduct several experiments with two different classifiers (Support Vector Machine and Naive Bayes) to compare their ability to detect malware based on Portable Executable (PE) file format headers. A malware classifier dataset built from the header field values of portable executable files was obtained from GitHub and used for the experimental part of the thesis. We conducted 5 different experiments with several different trial settings, and various statistical methods were used to assess the significance of the results. The first and second experiments show that using the SVM and Naive Bayes classification methods on our dataset can result in a high sensitivity rate. In the rest of the experiments, we focus on the accuracy rate of both classifiers with different settings. The results show that although there were no big differences in the accuracy rates of the classifiers, the variance of the accuracy rates is greater for Naive Bayes than for SVM. The study investigates the ability of two different methods to classify information in their distinctive ways. It also provides evidence that the learning-based approach provides a means for accurate automated analysis of malware behavior, which helps in the struggle against malicious software. (A toy classifier comparison follows below.)
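    A minimal sketch of the kind of comparison the thesis runs: SVM versus naive Bayes, with the mean and variance of cross-validated accuracy compared as in the thesis's observations. The synthetic features standing in for PE header fields and the classifier settings are assumptions; the actual GitHub dataset and experiment protocol are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Placeholder data standing in for numeric PE header fields (e.g. SizeOfCode,
# NumberOfSections) and a benign/malware label; the real dataset differs.
X = rng.normal(size=(500, 8))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
nb = GaussianNB()

for name, clf in [("SVM", svm), ("Naive Bayes", nb)]:
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    # Compare both the mean accuracy and its variance across folds, echoing
    # the thesis's finding that NB accuracy varied more than SVM accuracy.
    print(f"{name}: mean={scores.mean():.3f} var={scores.var():.4f}")
```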
  • Guo, Haipeng (2016)
    Along with the proliferation of smartphones, context-aware smartphone applications are gaining more and more attention from manufacturers and users. With the capability to infer the user's context, i.e., whether the user is in a meeting, driving, running or at home, smartphone applications can react accordingly. However, limiting factors such as limited battery capacity, computing power, and the inaccuracy of inference caused by inaccurate machine learning models and sensors hinder the large-scale deployment of context-aware applications. In this master's thesis, I develop CompleSense, a cooperative sensing framework designed for Android devices that facilitates the establishment and management of cooperation groups, so that developers can further exploit the potential of cooperative sensing without worrying about the implementation of system monitoring, data throttling, aggregation and synchronization of data streams, and wireless message passing via Wi-Fi. The system adopts Wi-Fi Direct technology for service advertisement and peer discovery. Once the cooperative group is formed, devices can share sensing and computing resources within short range via a Wi-Fi connection. CompleSense allows developers to customize the system based on their own optimization needs, e.g., optimizing the trade-offs of cooperative sensing. System components are loosely coupled to ensure extensibility, resilience and scalability, so that failure or change of a single component will not affect the remaining parts of the system. Developers can extend the current system by adding customized data processing kernels, machine learning models and optimized sharing schemes. In addition, CompleSense abstracts the controlling logic of sensors, so developers can easily integrate new sensors into the system by following a pre-defined programming interface. The performance of CompleSense is evaluated by carrying out a cooperative audio similarity calculation task with a varied number of clients, which also confirms that CompleSense is feasible to deploy on lower-tier devices, such as the Motorola Moto G.
  • Kruglaia, Anna (2016)
    Game design is a complicated, multifaceted creative process. While there are tools for developing computer games, tools that could assist with the more abstract creative parts of the process are underrepresented in the domain. One such part is the generation of game ideas. Ideation (idea generation) is researched by the computational creativity community in the contexts of design, story and poetry generation, music, and others. Some of the existing techniques can be applied to ideation for games. The process of generating ideas was investigated by applying said techniques to actual themes from game jams. The possibility of using metaphors produced by Metaphor Magnet together with ConceptNet was tested, and the results are presented as well.
  • Waltari, Otto Kustaa (2013)
    Advanced low-cost wireless technologies have enabled a huge variety of real-life applications in recent years. Wireless sensor technologies have emerged in almost every application field imaginable, and smartphones equipped with Internet connectivity and home electronics with networking capability have made their way into everyday life. The Internet of Things (IoT) is a novel paradigm that has risen to frame the idea of a large-scale sensing ecosystem in which all possible devices could contribute. The definition of a thing in this context is very vague: it can be anything from passive RFID tags on retail packaging to intelligent transducers observing the surrounding world. The number of connected devices in such a worldwide sensing network would be enormous, which is ultimately challenging for the current Internet architecture, an architecture that is several decades old and based on host-to-host connectivity. The current Internet addresses content by location. It is based on point-to-point connections, which eventually means that every connected device has to be uniquely addressable through a hostname or an IP address. This paradigm was originally designed for sharing resources rather than data. Today the majority of Internet usage consists of sharing data, which is not what it was designed for. Various patchy improvements have come and gone, but a thorough architectural redesign is required sooner or later. Information-Centric Networking (ICN) is a new networking paradigm that addresses content by name instead of location. Its goal is to replace the current where with what, since the location of most content on the Internet is irrelevant to the end user. Several ICN architecture proposals have emerged from the research community, of which Content-Centric Networking (CCN) is the most significant in the context of this thesis. We have come up with the idea of combining CCN with the concept of IoT. In this thesis we look at different ways to make use of hierarchical CCN content naming, in-network caching and other information-centric networking characteristics in a sensor environment. (A toy name-lookup sketch follows below.) As a proof of concept we implemented a presentation bridge for a home automation system that provides services to the network through CCN.
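    A toy sketch of hierarchical, name-based lookup in the spirit of CCN: content is addressed by a hierarchical name and an in-network cache answers requests by longest-prefix match. The names and the trie structure are illustrative; CCN defines its own Interest/Data semantics and wire formats.

```python
class NameTrie:
    """A trie keyed by hierarchical name components, e.g. /home/livingroom/..."""
    def __init__(self):
        self.children = {}
        self.content = None

    def insert(self, name, content):
        node = self
        for comp in name.strip("/").split("/"):
            node = node.children.setdefault(comp, NameTrie())
        node.content = content

    def longest_prefix(self, name):
        """Return content stored at the deepest matching prefix, if any."""
        node, best = self, None
        for comp in name.strip("/").split("/"):
            if node.content is not None:
                best = node.content
            if comp not in node.children:
                return best
            node = node.children[comp]
        return node.content if node.content is not None else best

cache = NameTrie()
cache.insert("/home/livingroom/temperature", b"21.5C")
# A request for a more specific name is satisfied from the cached prefix.
print(cache.longest_prefix("/home/livingroom/temperature/latest"))  # b'21.5C'
```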
  • Tilli, Tuomo (2012)
    BitTorrent is one of the most used file sharing protocols on the Internet today. Its efficiency is based on the fact that when users download a part of a file, they simultaneously upload other parts of the file to other users. This allows users to efficiently distribute large files to each other, without the need for a centralized server. The most popular torrent site is the Pirate Bay, with more than 5,700,000 registered users. The motivation for this research is to find information about the use of BitTorrent, especially on the Pirate Bay website, which will be helpful for system administrators and researchers. We collected data on all of the torrents uploaded to the Pirate Bay from the 25th of December, 2010 to the 28th of October, 2011. Using this data we found that a small percentage of users are responsible for a large portion of the uploaded torrents: there are over 81,000 distinct users, but the top nine publishers have published more than 16% of the torrents. We examined the publishing behaviour of the top publishers. The top usernames were publishing so much content that it became obvious that there are groups of people behind them. Most of the published content is video files, with a 52% share. We found that torrents are uploaded to the Pirate Bay website at a fast rate: about 92% of consecutive uploads happened within 100 seconds of each other. However, the publishing activity varies a lot. These deviations may be caused by downtime of the Pirate Bay website, fluctuations in the publishing activity of the top publishers, national holidays, or the day of the week. One would think that the publishing activity of so many independent users would be quite level, but surprisingly this is not the case. About 85% of the files of the torrents are less than 1.5 GB in size. We also discovered that torrents of popular feature films were uploaded to the Pirate Bay very soon after their release, and the top publishers appear to be competing over who releases the torrents first. The impact of the top publishers on the publishing of torrents thus seems quite significant. (A small gap-analysis sketch follows below.)
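    A small sketch of the kind of timing analysis behind the 92%-within-100-seconds figure above: from a list of upload timestamps, compute the gaps between consecutive uploads and the share of gaps of at most 100 seconds. The timestamps here are synthetic; the thesis used uploads crawled from the Pirate Bay.

```python
import random

random.seed(1)
# Synthetic upload times in seconds: roughly one upload every ~30 s with jitter.
times = sorted(random.expovariate(1 / 40) + i * 30 for i in range(10000))

# Gaps between consecutive uploads, and the share at most 100 seconds apart.
gaps = [b - a for a, b in zip(times, times[1:])]
share_fast = sum(g <= 100 for g in gaps) / len(gaps)
print(f"consecutive uploads within 100 s: {share_fast:.1%}")
```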
  • Hemminki, Samuli (2012)
    In this thesis we present and evaluate a novel approach to energy-efficient and continuous transportation behavior monitoring on smartphones. Our work builds on a novel adaptive hierarchical sensor management scheme (HASMET), which decomposes the classification task into smaller subtasks. In comparison to previous work, our approach improves transportation behavior monitoring in three ways. First, by employing only the minimal set of sensors necessary for each subtask, we significantly reduce the power consumption of the detection task. Second, using the hierarchical decomposition, we can tailor features and classifiers to each subtask, improving the accuracy and robustness of detection. Third, we extend the detectable motorised modalities to cover the most common public transportation vehicles. All of these attributes are highly desirable for real-time transportation behavior monitoring and serve as important steps toward implementing the first truly practical transportation behavior monitoring on mobile phones. In the course of the research, we developed an Android application for sensor data collection and used it to collect over 200 hours of transportation data, along with 2.5 hours of energy consumption data for the sensors. Applying our method to this data, we demonstrate that compared to the current state of the art, our method offers higher detection accuracy, provides more robust transportation behavior monitoring and achieves a significant reduction in power consumption. For evaluating the results with respect to the continuous nature of transportation behavior monitoring, we use the event- and frame-based metrics presented by Ward et al. (A schematic cascade sketch follows below.)
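    A schematic sketch of the hierarchical decomposition described above: each stage uses only the sensors it needs, so costlier sensors are powered only when an earlier, cheaper stage calls for them. The stage logic, features and thresholds are invented for illustration and are not HASMET's actual classifiers.

```python
def accelerometer_stationary(sample):
    # Cheap, always-on first stage: accelerometer variance only.
    return sample["accel_var"] < 0.05

def walking_or_motorised(sample):
    # Second stage, still accelerometer-based: step frequency feature.
    return "walking" if sample["step_freq"] > 1.0 else "motorised"

def motorised_mode(sample):
    # Costliest stage, reached only for motorised cases: a classifier
    # tailored to distinguishing vehicle types would run here.
    return "rail" if sample["low_freq_energy"] > 0.7 else "road"

def classify(sample):
    if accelerometer_stationary(sample):
        return "stationary"            # no further sensors ever powered on
    kind = walking_or_motorised(sample)
    if kind == "walking":
        return "walking"
    return motorised_mode(sample)      # extra sensors enabled only here

print(classify({"accel_var": 0.2, "step_freq": 0.3, "low_freq_energy": 0.9}))
```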
  • Hinkka, Atte (2018)
    In this thesis we use statistical n-gram language models and the perplexity measure for language typology tasks. We interpret the perplexity of a language model as a distance measure when the model is applied to a phonetic transcript of a language the model was not originally trained on. We use these distance measures for detecting language families, detecting closely related languages, and for language family tree reproduction. We also study the sample sizes required to train the language models and estimate how large corpora are needed for the successful use of these methods. We find that trigram language models trained on automatically transcribed phonetic transcripts, together with the perplexity measure, can be used both for detecting language families and for detecting closely related languages. (A toy perplexity sketch follows below.)
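    A toy sketch of the method described above: train a character-level trigram model with add-one smoothing on one transcript and use its perplexity on another transcript as a distance. The smoothing choice and the toy strings are simplifications; the thesis works with automatically produced phonetic transcripts.

```python
import math
from collections import Counter

def ngrams(text, n):
    padded = "^" * (n - 1) + text + "$"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

class NgramModel:
    def __init__(self, text, n=3):
        self.n = n
        self.counts = Counter(ngrams(text, n))
        self.context = Counter(g[:-1] for g in self.counts.elements())
        self.vocab = len(set(text)) + 2  # symbols plus padding markers

    def logprob(self, gram):
        # Add-one (Laplace) smoothed conditional probability of the last symbol.
        num = self.counts[gram] + 1
        den = self.context[gram[:-1]] + self.vocab
        return math.log(num / den)

    def perplexity(self, text):
        grams = ngrams(text, self.n)
        return math.exp(-sum(self.logprob(g) for g in grams) / len(grams))

model = NgramModel("kissa istuu katolla ja katselee kaupunkia")
# Lower perplexity = the test transcript looks more like the training language,
# so perplexity acts as a distance between languages.
print(model.perplexity("kissa katselee kattoja"))
print(model.perplexity("the cat sits on the roof"))
```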
  • Ray, Debarshi (2012)
    Pervasive longitudinal studies in people's intimate surroundings involve gathering data about how people behave in their various places of presence. It is hard to be fully pervasive, as this has traditionally required sophisticated instrumentation that may be difficult to acquire and prohibitively expensive; moreover, setting up such an experiment is laborious. We present a system, in the form of its requirements, design and implementation, that is primarily aimed at collecting data from people's homes. It aims to be as pervasive as possible, and can collect data about a family in the form of audio and video feeds from microphones and cameras, network logs, and home appliance (e.g., TV) usage patterns. The data is then transported over the Internet to a server placed in close proximity to the researcher, while being protected from unauthorised access. Instead of instrumenting the test subjects' existing devices, we build our own integrated appliance, which is placed inside their houses and has all the necessary features for data collection and transportation. We build the system using cheap off-the-shelf commodity hardware and free and open source software, and evaluate different hardware and software configurations to see how well they can be integrated and how performant and reliable they are in real-life scenarios. Finally, we demonstrate a few simple techniques that can be used to analyze the data to gain insight into the behaviour of the participants.
  • Tulilaulu, Aurora (2017)
    In my Master's thesis I present data-driven automatic composition, i.e. data musicalization. Data musicalization is about making variables found in data audible in automatically composed music. The intent is for the music to work like a visualization for the ears, illustrating selected attributes of the data. The thesis surveys the different ways in which sonification and automatic or computer-assisted composition have been done before, and what applications they have. I cover the most common ways to generate music, such as the most typical stochastic methods, grammars, and methods based on machine learning, and comment on the strengths and weaknesses of the different methods. I also briefly describe sonification, i.e. mapping data directly to an audio signal without a musical element. I further discuss briefly how far automated composition, and its credibility in the eyes of human evaluators, has come at its furthest, using a few recognized composing programs as examples. I discuss two different musicalization programs I have built. The first generates songs by condensing data collected from one night of the user's sleep into a piece lasting four to eight minutes. The second makes music in real time based on adjustable parameters, so it can be connected to another program that analyzes data and adjusts the parameters; in the example discussed, the music is produced from a chat log, with, for example, the tone and speed of the conversation affecting the music. I go through the principles by which my programs generate music (a toy Markov sketch follows below), and justify the decisions made using the basics of music theory and composition. I explain the principles by which the data used is, or can be made, audible in the music, i.e. how musicalization differs from ordinary machine composition and from sonification, and how it sits at the boundary of these two existing fields of research. Finally, I present the results of user experiments in which users were asked to assess how well the musicalization of chat logs works, and, based on these results and the current state of the field, I consider possible applications of musicalization and future research that could be done on the topic.
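    A toy sketch of one stochastic technique surveyed above: a first-order Markov chain over pitches, with a single data-driven parameter (tempo_bias, standing in for something like conversation speed) steering the output. Entirely illustrative; the thesis's two programs are considerably more elaborate.

```python
import random

random.seed(7)
TRANSITIONS = {  # pitch -> possible next pitches (a small hand-made C major set)
    "C4": ["D4", "E4", "G4"],
    "D4": ["C4", "E4", "F4"],
    "E4": ["D4", "F4", "G4"],
    "F4": ["E4", "G4"],
    "G4": ["E4", "C5"],
    "C5": ["G4", "E4"],
}

def generate(start="C4", length=16, tempo_bias=0.5):
    """Walk the chain; tempo_bias in [0, 1] shortens note durations as it grows,
    letting an attribute of the data audibly shape the music."""
    pitch, melody = start, []
    for _ in range(length):
        duration = 1.0 - 0.75 * tempo_bias   # quarter notes shrink with bias
        melody.append((pitch, round(duration, 2)))
        pitch = random.choice(TRANSITIONS[pitch])
    return melody

print(generate(tempo_bias=0.8))  # a fast conversation -> faster notes
```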
  • Althermeler, Nicole (2016)
    Metagenomics promises to shed light on the functioning of microbial communities and their surrounding ecosystems. In metagenomic studies the genomic sequences of a collection of microorganisms are extracted directly from a specific environment. Up to 99% of microbes cannot be cultivated in the lab, so traditional analysis techniques have very limited applicability in this challenging setting; by extracting the sequences directly from the environment, metagenomic studies circumvent this dilemma. Metagenomics has thus become a powerful tool in the analysis of the diversity and metabolic capability of environmental microbes. However, metagenomic studies have challenges of their own. In this thesis we investigate several aspects of metagenomic data set analysis, focusing on means of (1) verifying the adequacy of taxonomic unit and enzyme representation and annotation in the sample, (2) highlighting similarities between samples by principal component analysis, (3) visualizing metabolic pathways with manually drawn metabolic maps from the Kyoto Encyclopedia of Genes and Genomes, and (4) estimating taxonomic distributions of pathways with a novel strategy. A case study of deep bedrock groundwater metagenomic samples illustrates these methods: water samples from boreholes up to 2500 meters deep, from two different sites in Finland, display the applicability and limitations of the aforementioned methods, and publicly available metagenomic and genomic samples serve as baseline references. Our analysis resulted in a taxonomic and metabolic characterization of the samples. We were able to adequately retrieve and annotate the metabolic content of the deep bedrock samples, and the visualization provided a tool for further investigation. The microbial community distribution could be characterized at higher levels of abstraction. Previously suspected similarities to fungi or archaea were not confirmed. The first promising results were observed with the novel strategy for estimating taxonomic distributions of pathways. (A small PCA sketch follows below.) Further results can be found at: http://www.cs.helsinki.fi/group/urenzyme/deepfun/
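    A small sketch of step (2) above: principal component analysis over a toy sample-by-taxon abundance matrix, so that similar samples land close together in the projection. The matrix values are invented; the thesis applies this to borehole water samples and reference metagenomes.

```python
import numpy as np
from sklearn.decomposition import PCA

# Rows: samples (e.g. two boreholes and one reference); columns: relative
# abundances of taxonomic units or enzyme families.
abundances = np.array([
    [0.40, 0.10, 0.30, 0.20],
    [0.38, 0.12, 0.28, 0.22],
    [0.05, 0.60, 0.10, 0.25],
])

pca = PCA(n_components=2)
coords = pca.fit_transform(abundances)
print(coords)                          # similar samples land close together
print(pca.explained_variance_ratio_)   # variance captured by each component
```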
  • Bakharzy, Mohammad (2014)
    In the new era of the digital economy, agility and the ability to adapt to market changes and customers' needs are crucial for sustainable competitiveness. It is vital to identify and consider customers' and users' needs in order to make fact-driven decisions and to evaluate assumptions and hypotheses before actually allocating resources to them. Understanding customers' needs and delivering valuable products or services based on deep customer insight demands Continuous Experimentation. Continuous Experimentation refers to collecting customer and user feedback constantly and understanding the real value of products and services, in order to test new ideas and hypotheses as early as possible with minimum resource allocation. Experimentation requires a technical infrastructure including tools, methods, processes, interfaces and APIs to collect, store, visualize and analyze the data. This thesis analyses the state of the practice and the state of the art of current tools with functionalities that support or might support continuous experimentation. The result of this analysis is a set of problems identified in current tools, as well as a set of requirements to be fulfilled for tackling those problems. Among the problems, the customizability of the tools to meet the needs of different companies and scenarios is of utmost importance; the lack of customizability in current tools has led companies to allocate their resources to developing their own proprietary tools tailored to their custom needs. Based on requirements that support better customizability, a prototype tool that supports continuous experimentation has been designed and implemented. The tool's support is evaluated in a real-world scenario with respect to the requirements and the customizability issue. (A small bucketing sketch follows below.)
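    A minimal sketch of one building block such an experimentation tool needs: deterministic assignment of users to experiment variants, plus a simple event log for later analysis. The hash-based bucketing shown is a common technique, not the prototype built in the thesis.

```python
import hashlib
import json
import time

def assign(user_id, experiment, variants=("control", "treatment")):
    """Stable bucketing: the same user always sees the same variant."""
    h = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(h, 16) % len(variants)]

def log_event(log, user_id, experiment, event):
    # Each event records which variant the user saw, enabling later comparison
    # of metrics between control and treatment groups.
    log.append({"ts": time.time(), "user": user_id,
                "experiment": experiment,
                "variant": assign(user_id, experiment), "event": event})

events = []
log_event(events, "user-42", "new-checkout-flow", "purchase")
print(json.dumps(events, indent=2))
```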
  • Ylikotila, Henri (2018)
    Self-aware computing is an emerging research area which aims to solve issues stemming from a combination of increasingly complex systems and diverse operating conditions. It adapts the key concepts of human self-awareness to the computational context. These concepts are well established in cognitive science, psychology, social psychology and philosophy, but novel in the software engineering context. Self-aware systems are able to acquire information about their environment and their internal state. The obtained information is used to build knowledge through reasoning, and this growing knowledge feeds an internal learning model that enables the system to adapt its actions. Self-aware systems can autonomously navigate runtime changes in their goals and environmental conditions, thus enabling a high degree of adaptivity to changing conditions that are difficult to predict at design time. In contrast, traditional systems have to operate with a set of predefined rules that rely on the design-time knowledge of the designer. This study aims to identify proposed software architecture solutions that are of value for both practitioners and researchers. The study was conducted as a systematic literature review, for which we developed a repeatable review protocol in order to cover all relevant literature in the area; the protocol was explicitly defined and applied rigorously. In our review we extracted several proposed architecture designs from the 9 reviewed primary studies. These designs include reference architectures, architecture frameworks, and architectural patterns for designing self-aware systems. This study can be used to get an overview of state-of-the-art software architecture designs for self-aware systems. Additionally, it can provide support for finding future research directions regarding self-aware systems. (A schematic loop sketch follows below.)
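    A schematic sketch of the observe-reason-adapt loop described above. The class, its trivial "learning", and the adapted parameter are illustrative only; the reviewed reference architectures, frameworks and patterns realize this loop in far richer ways.

```python
class SelfAwareSystem:
    def __init__(self):
        self.knowledge = []     # accumulated observations
        self.threshold = 0.5    # an internal parameter adaptable at runtime

    def observe(self, environment, internal_state):
        # Acquire information about the environment and the internal state.
        self.knowledge.append((environment, internal_state))

    def reason(self):
        # Build knowledge from observations: here, a running average load.
        loads = [env["load"] for env, _ in self.knowledge]
        return sum(loads) / len(loads)

    def adapt(self):
        # Adjust behavior at runtime instead of relying on design-time rules.
        avg_load = self.reason()
        self.threshold = min(0.9, max(0.1, avg_load))

system = SelfAwareSystem()
for load in (0.2, 0.7, 0.9):
    system.observe({"load": load}, {"queue": 3})
    system.adapt()
print(system.threshold)
```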
  • Snellman, Mikael (2018)
    Today many of the most popular service providers, such as Netflix, LinkedIn and Amazon, compose their applications from a group of individual services. These providers need to deploy new changes and features continuously, without any downtime in the application, and scale individual parts of the system on demand. To address these needs, the use of the microservice architecture has grown in popularity in recent years. In a microservice architecture, the application is a collection of services which are managed, developed and deployed independently. This independence enables the microservices to be polyglot when needed, meaning that developers can choose the technology stack for each microservice individually depending on its nature. This independent and polyglot nature of microservices can make developing a single service easier, but it also introduces significant operations overhead when not taken into account while adopting the microservice architecture. These overheads include the need for extensive DevOps, monitoring, infrastructure, and preparation for the fallacies of distributed systems. Many cloud-native and microservice-based applications suffer from outages even with thorough unit and integration tests applied. This can be because distributed cloud environments are prone to failures at the node or even regional level, which causes unexpected behavior in systems that are not prepared for it. The application's ability to recover and maintain functionality at an acceptable level under such unexpected faults, also known as resilience, should therefore be tested systematically. In this thesis we give an introduction to the microservice architecture. We inspect an industry case in which a leading banking company suffered from issues regarding resiliency. We examine the challenges of resilience testing applications based on the microservice architecture. We compose a small microservice application which we use to study the defensive design patterns, and the tools and methods available for testing the resilience of a microservice architecture. (A sketch of one defensive pattern follows below.)
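    A sketch of one defensive design pattern relevant to the resilience issues described above: a circuit breaker that stops calling a failing downstream service and probes it again after a cool-down. The thresholds and the pattern choice are illustrative; the thesis studies such patterns and resilience-testing tools more broadly.

```python
import time

class CircuitBreaker:
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None            # None = closed (calls allowed)

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                # Fail fast instead of piling load onto a failing service.
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None        # half-open: let one trial call through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0                # success closes the circuit again
        return result
```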
  • Eklund, Tommy (2013)
    Large screens, interactive or not, are becoming a common sight at shopping centers and other public places. These screens are used to advertise or to share information interactively. Combined with the omnipresence of smartphones, this gives rise to a unique opportunity to join these two interfaces, combining their strengths and complementing their weaknesses. Smartphones are very mobile thanks to their small size and can access information from virtually anywhere, but they suffer from information overflow: users have too many applications and web sites to search through to find what they want or need in a timely fashion. On the other hand, public screens are too large to provide information everywhere or in a personalized way, but they often have the information you need, when and where you need it. Thus large screens provide an ideal place for users to select content onto their smartphones. Large screens also have the advantage of screen size, and research has indicated that using a second screen with small handheld devices can improve the user experience. This thesis undertook the design and development of a prototype Android application for an existing large interactive public screen. The initial goal was to study the different aspects of personal mobile devices coupled with large public screens. The large screen interface is also under development as a ubiquitous system, and the mobile application was designed to be part of this system; thus the design of the mobile application needed to be consistent with the public screen. During the development of this application it was observed that the small mobile screen could not support the content or interactions designed for a much larger screen. As a result, this thesis focuses on developing a prototype that further research could draw upon. This led to a study of small-screen graph data visualization and of previous research on mobile applications working together with large public screens. This thesis presents a novel approach for displaying graph data designed for large screens on a small mobile screen. The work also discusses many challenges and questions related to large-screen interaction with mobile devices that arose during the development of the prototype. An evaluation was conducted to gather both quantitative and qualitative data on the interface design and its consistency with the large screen interface, to further analyze the resulting prototype. The most important findings in this work are the problems encountered and the questions raised during the development of the mobile application prototype. This thesis provides several suggestions for future research using the application, the ubiquitous system and the large screen interface. The study of related work and the prototype development also led to suggested design guidelines for this type of application. The evaluation data also suggests that the final mobile application design is both consistent with and performs better than a faithful implementation of the visuals and interaction model of the original large screen interface.