Browsing by discipline "Datavetenskap"


  • Tuulos, Natalia (2014)
    Animation techniques offer efficient methods for animating characters and rigid bodies. The thesis describes the most common 3D animation techniques, without forgetting the earlier techniques that preceded them, introduces the graphics pipeline and the skeletal model, and covers topics related to the management, creation, processing, and storage of game assets. In addition, the thesis examines the structuring of an animation system from a software architecture perspective and presents the Unreal and Unity game engines and their animation systems. The constructive part of the thesis describes example applications implemented in both game engines and the experiences gained from the implementations, and provides a comparative analysis based on those experiences.
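    As a small illustration of the skeletal model mentioned above, the sketch below shows the core of skeletal animation: composing each bone's local transform with its parent's to obtain world-space positions (forward kinematics). It is a generic 2D simplification in Python; the bone names and structure are illustrative and not taken from the thesis's Unreal/Unity examples.

        # A minimal sketch of skeletal forward kinematics: each bone stores
        # a local rotation relative to its parent, and a bone's world
        # transform is the composition of all transforms from the root down.
        # The 2D simplification and names are illustrative only.
        import math

        class Bone:
            def __init__(self, name, parent=None, length=1.0, angle=0.0):
                self.name = name          # bone identifier
                self.parent = parent      # parent Bone, or None for the root
                self.length = length      # bone length in model units
                self.angle = angle        # local rotation relative to parent

            def world_angle(self):
                # Accumulate rotations from the root down to this bone.
                if self.parent is None:
                    return self.angle
                return self.parent.world_angle() + self.angle

            def tip_position(self):
                # The bone starts where its parent ends and extends along
                # its accumulated world rotation.
                start = self.parent.tip_position() if self.parent else (0.0, 0.0)
                a = self.world_angle()
                return (start[0] + self.length * math.cos(a),
                        start[1] + self.length * math.sin(a))

        # Two-bone arm: rotating the upper arm moves the whole chain.
        upper = Bone("upper_arm", angle=math.radians(30), length=2.0)
        lower = Bone("lower_arm", parent=upper, angle=math.radians(45), length=1.5)
        print(lower.tip_position())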
  • Sairanen, Samuli (2013)
    The term 'botnet' has surfaced in the media in recent years, highlighting the complications of manipulating massive numbers of computers. These zombie computers provide a platform for many illegal or otherwise shady actions, such as spam mailing and denial-of-service attacks. The power of such networks is noticeable on a global scale, but what is really known about them? Why do they exist in the first place? How are botnets created today, and how do they work on a technical level? How is the emergence of mobile internet computing affecting botnet creation, and what kinds of possibilities does it offer? The goal of this work is to trace the history of botnets and to understand how botnets are structured, how they are built, and how they affect the whole internet culture. Methods for fighting the threat of botnets are also discussed.
  • Silvennoinen, Aku (2018)
    De-anonymization is an important requirement in real-world V2X systems (e.g., to enable effective law enforcement). In de-anonymization, a pseudonymous identity is linked to a long-term identity in a process known as pseudonym resolution. For de-anonymization to be acceptable from political, social, and legislative points of view, it has to be accountable. A system is accountable if no action by it or using it can be taken without some entity being responsible for the action. Being responsible for an action means that the responsible entity cannot deny its responsibility for, or relation to, an action afterwards. The main research question is: how can we achieve accountable pseudonym resolution in V2X communication systems? One possible answer is to develop an accountable de-anonymization service that is compatible with existing V2X pseudonym schemes. The accountability can be achieved by making some entities accountable for the de-anonymization. This thesis proposes a system design that enables: i) fine-grained pseudonym resolution; ii) the possibility to inform the subject of the resolution after a suitable time delay; and iii) the possibility for the public to audit the aggregate number of pseudonym resolutions. A trusted execution environment (TEE) is used to ensure these accountability properties. The security properties of the design are verified using symbolic protocol analysis.
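    To make the auditability property concrete, the sketch below shows one way such a guarantee can be structured: an append-only, hash-chained log of resolution events whose length (the aggregate number of resolutions) is publicly auditable without revealing who was de-anonymized. All names are hypothetical; the thesis enforces such properties inside a TEE, which this plain-Python sketch does not model.

        # Hypothetical sketch of an auditable resolution log. The public
        # learns only the count and chain head, not the identities; the
        # thesis realizes comparable guarantees inside a TEE.
        import hashlib, json, time

        class ResolutionLog:
            def __init__(self):
                self.entries = []
                self.head = b"genesis"

            def record_resolution(self, pseudonym_id: str) -> None:
                # Store only a commitment to the resolved pseudonym,
                # chained to the previous head so the past cannot be
                # silently rewritten.
                commitment = hashlib.sha256(pseudonym_id.encode()).hexdigest()
                entry = json.dumps({"t": time.time(), "c": commitment}).encode()
                self.head = hashlib.sha256(self.head + entry).digest()
                self.entries.append(entry)

            def public_audit(self):
                # Aggregate number of resolutions plus the chain head.
                return len(self.entries), self.head.hex()

        log = ResolutionLog()
        log.record_resolution("pseudonym-42")
        print(log.public_audit())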
  • Oksanen, Miika (2018)
    In software product line engineering (SPLE), parts of the developed software are made variable in order to build a whole range of software products at the same time. This is widely known to have a number of potential benefits, such as saving costs when the product line is large enough. However, managing variability in software introduces challenges that are not well addressed by the tools used in conventional software engineering, and specialized tools are needed. Research questions: 1) What are the most important requirements for SPLE tools for a small-to-medium sized organisation aiming to experiment with SPLE? 2) How well are those requirements met in two specific SPLE tools, Pure::Variants and the Clafer tools? 3) How do the studied tools compare against each other when it comes to their suitability for the chosen context (a digital board game platform)? 4) How can common requirements for SPL tools be generalized to apply to both graphical and text-based tools? A list of requirements is first obtained from the literature and then used as a basis for an experiment in which support for each requirement is tried out with both tools. A part of an example product line is then developed with both tools and the experiences are reported. Both tools were found to support the list of requirements quite well, although there were some usability problems and not everything could be tested due to technical issues. Based on developing the example, both tools were found to have their own strengths and weaknesses, probably resulting in part from one being GUI-based and the other textual. ACM Computing Classification System (CCS): (1) CCS → Software and its engineering → Software creation and management → Software development techniques → Reusability → Software product lines (2) CCS → Software and its engineering → Software notations and tools → Software configuration management and version control systems
  • Havukainen, Heikki (2015)
    Managing a telecommunications network requires collecting and processing a large amount of data from the base stations. The current method used by infrastructure providers is hierarchical, and it has significant performance problems. As the amount of traffic within telecommunications networks is expected to keep increasing rapidly in the foreseeable future, these performance problems will become more and more severe. This thesis outlines a distributed publish/subscribe solution designed to replace the current method used by infrastructure providers. We propose an intermediate layer between the base stations and the network management applications, built on top of Apache Kafka. The solution is evaluated qualitatively from different aspects. ACM Computing Classification System (CCS): Networks → Network management; Networks → Network architectures
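    As an illustration of the proposed intermediate layer, the sketch below shows how base-station measurements could be published to, and consumed from, a Kafka topic. It is a minimal sketch using the kafka-python client; the broker address, topic name, and message format are assumptions, not details from the thesis.

        # Minimal publish/subscribe sketch with the kafka-python client.
        # Broker address, topic name, and payload format are hypothetical.
        import json
        from kafka import KafkaProducer, KafkaConsumer

        # Base-station side: publish a measurement to the intermediate layer.
        producer = KafkaProducer(
            bootstrap_servers="localhost:9092",
            value_serializer=lambda v: json.dumps(v).encode("utf-8"),
        )
        producer.send("base-station-metrics", {"station": "bs-17", "load": 0.73})
        producer.flush()

        # Management-application side: subscribe to the same topic.
        consumer = KafkaConsumer(
            "base-station-metrics",
            bootstrap_servers="localhost:9092",
            value_deserializer=lambda b: json.loads(b.decode("utf-8")),
            auto_offset_reset="earliest",
        )
        for message in consumer:
            print(message.value)
            break  # one message is enough for the sketch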
  • Hirvikoski, Kasper (2015)
    Software delivery has evolved notably over the years, starting from plan-driven methodologies and lately moving to principles and practices shaped by Agile and Lean ideologies. The emphasis has moved from thoroughly documenting software requirements to a more people-oriented approach of building software in collaboration with users and experimenting with different approaches. Customers are directly integrated into the process. Users cannot always identify software needs before interacting with actual implementations. Building software is not only about building products in the right way, but also about building the right products. Developers need to experiment with different approaches, directly and indirectly. Not only do users value practical software, but the development process must also emphasise the quality of the product or service. Development processes have formed to support these ideologies. To enable a short feedback cycle, features are deployed to production often. Software is primarily delivered through a pipeline consisting of three stages: development, staging, and production. Developers develop features by writing code, verify them by writing related tests, interact with and test the software in a production-like 'staging' environment, and finally deploy the features to production. Many practices have formed to support this deployment pipeline, notably Continuous Integration, Deployment, and Experimentation. These practices focus on improving the flow of how software is developed, tested, deployed, and experimented with. The Internet has provided a thriving environment for using new practices. Due to the distributed nature of the web, features can be deployed without the need for any interaction from users. Users might not even notice the change. Obviously, there are other environments where many of these practices are much harder to achieve. Embedded systems, which have a dedicated function within a larger mechanical or electrical system, require hardware to accompany the software. The related processes and environments have their limitations. Hardware development can only be iterative to a certain degree. Producing hardware takes up-front design and time. Experimentation is more expensive. Many stringent contexts require processes with assurances and transparency, usually provided by documentation and long testing phases. In this thesis, I explore how advances in streamlining software delivery on the web have influenced the development of embedded systems. I conducted six interviews with people working on embedded systems to get their views and invite discussion about the development of embedded systems. Though many concerns and obstacles are presented, the field is struggling with the same issues that Agile and Lean development are trying to resolve. Plan-driven approaches are still used, but distinct features of iterative development can be observed. On the leading edge, organisations are actively working on streamlining software and hardware delivery for embedded systems. Many of the advances are based on how Agile and Lean development are being used for user-focused software, particularly on the web.
  • Linkola, Simo (2016)
    A measure of how similar (or distant) two computer programs are has a wide range of possible applications, for example in malware analysis or in the analysis of university students' programming exercises. However, as programs may be arbitrarily structured, capturing the similarity of two non-trivial programs is a complex task. By extracting call graphs (graphs of the caller-callee relationships of the program's functions, where nodes denote functions and directed edges denote function calls) from the programs, the similarity measurement can be turned into a graph problem. Previously, static call graph distance measures have largely been based on graph matching techniques, e.g. graph edit distance or maximum common subgraph, which are known to be costly. We propose a call graph distance measure based on features that preserve some structural information from the call graph without explicitly matching user-defined functions together. We define basic properties of the features, several ways to compute the feature values, and give a basic algorithm for generating the features. We evaluate our features using two small datasets: a dataset of malware variants and a dataset of university students' programming exercises, focusing especially on the former. For our evaluation we use experiments in information retrieval and clustering. We compare our results for both datasets to a baseline, and additionally, for the malware dataset, to the results obtained with a graph edit distance approximation. Our preliminary results show that even though the feature generation approach is simpler than the graph edit distance approximation, the generated features can perform on a similar level. However, experiments on larger datasets are still required to verify the results.
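    To make the idea concrete, the sketch below extracts a few simple structural features from call graphs and compares programs by the distance between their feature vectors. The specific features (node count, edge count, degree statistics) are illustrative guesses rather than the feature set defined in the thesis, and the sketch assumes the networkx library.

        # Hypothetical structural features for call-graph comparison. This
        # only illustrates the general approach of replacing costly graph
        # matching with feature vectors; the thesis defines its own features.
        import math
        import networkx as nx

        def call_graph_features(g: nx.DiGraph) -> list:
            out_degrees = [d for _, d in g.out_degree()]
            return [
                g.number_of_nodes(),                      # number of functions
                g.number_of_edges(),                      # number of call edges
                max(out_degrees) if out_degrees else 0,   # widest caller
                sum(out_degrees) / len(out_degrees) if out_degrees else 0.0,
            ]

        def feature_distance(a: nx.DiGraph, b: nx.DiGraph) -> float:
            # Euclidean distance between feature vectors; no explicit
            # function-to-function matching is needed.
            fa, fb = call_graph_features(a), call_graph_features(b)
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(fa, fb)))

        # Two tiny call graphs: main -> {parse, run}, run -> log.
        g1 = nx.DiGraph([("main", "parse"), ("main", "run"), ("run", "log")])
        g2 = nx.DiGraph([("main", "run"), ("run", "log")])
        print(feature_distance(g1, g2))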
  • Ruonala, Henna-Riikka (2017)
    A systematic literature review was conducted to examine the use of agile methods in game development. A total of 23 articles were found and analysed with the help of concept matrices. The results indicate that agile methods are used to varying degrees in game development. Agile methods lead to improved quality of games through a prototyping, playtesting, and feedback loop. Communication and the team's ability to take responsibility are also enhanced. Challenges arise from multidisciplinary teams, management issues, a lack of training in agile methods, and the quality of code.
  • Aintila, Eeva Katri Johanna (2016)
    The benefits expected from agile methodologies for project success have encouraged organizations to extend agile approaches to areas they were not originally intended for, such as large-scale information systems projects. Research on agile methods in large-scale software development projects has existed for a few years and is considered a research area of its own. This study investigates agile methods in large-scale software development and information systems projects, and its goal is to produce more understanding of the suitability of agile methods and the conditions under which they would most likely contribute to project success. The goal is specified with three research questions: I) what are the characteristics specific to large-scale software engineering or information systems projects, II) what are the challenges caused by these characteristics, and III) how do agile methodologies mitigate these challenges? The study reviews recent research papers related to the subject and identifies the characteristics of large-scale projects and the challenges associated with them. Material on the topic was searched for starting from conference publications and distribution sites related to the subject. The collected information is supplemented with an analysis of project characteristics against the SWEBOK knowledge areas. The resulting challenge categories are mapped against the agile practices promoted by the Agile Alliance to assess the impact of the practices on the challenges. The study is not a systematic literature review. As a result, six characteristics specific to large-scale software development and IS projects, and ten challenge categories associated with these characteristics, are identified. The analysis reveals that agile practices enhance team-level performance and provide direct practices for managing the challenges associated with a high number of changes and the unpredictability of the software process, both characteristic of a large-scale IS project, but challenges still remain at the cross-team and overall project level. It is concluded that, when seeking a process model with an agile approach that would respond to all the characteristics of a large-scale project and thus increase the likelihood of project success, adaptations of current practices and the development of additional practices are needed. To this end, four areas of adaptations and additional practices are suggested for scaling agile methodologies to large-scale project contexts: 1) adapting practices related to the distribution, assignment, and follow-up of tasks, 2) aligning practices related to the software development process, ways of working, and common principles across all teams, 3) developing additional practices to facilitate collaboration between teams, to ensure interaction with the cross-functional project dimensions, and to strengthen dependency management and decision making between all project dimensions, and 4) possibly developing and aligning practices to facilitate the teams' external communication. The results of the study are expected to be useful for software development and IS project practitioners considering agile method adoption or adaptation in a large-scale project context. ACM Computing Classification System (CCS) 2012: - Social and professional topics~Management of computing and information systems - Software and its engineering~Software creation and management
  • Kallonen, Susanna (2013)
    Brain-computer interface (BCI) research is a young, interdisciplinary field that aims to develop thought-controlled interfaces, both as assistive and rehabilitation tools for people suffering from medical disorders and for entertainment and utility use by healthy people. Brain-computer interfaces enable a new kind of direct communication channel between the human brain and a computer, one that does not depend on the peripheral nervous system and muscles. This thesis surveys the research done on brain-computer interfaces and examines their application areas and implementation principles. Brain-computer interfaces can already improve the quality of life of severely paralysed people by offering them a way to communicate with their environment. With a brain-computer interface they can type on a virtual computer keyboard by thought alone. Using the technology to move limb prostheses, steer a wheelchair, alleviate the symptoms of epilepsy, play computer games, and for numerous other practical applications is currently being studied. A brain-computer interface can be based on invasive measurement, where brain activity is measured inside the skull, or on non-invasive measurement, where the measurement is made outside the scalp. The thesis finds that working brain-computer interfaces can be implemented with both invasive and non-invasive techniques. Invasive methods are best suited for applications that require high signal accuracy and whose target group consists of ill or injured persons. Non-invasive methods suit applications for which lower measurement accuracy is sufficient or which are also used by healthy persons. The thesis concludes that brain-computer interfaces are suitable both for healthy people and for those suffering from various medical disorders. In addition, based on the presented research, the thesis takes a stance on what kinds of BCI applications are worth developing. This result is compared with an interview that surveys the thoughts of a person belonging to the target group of brain-computer interfaces about BCIs, their application areas, and the requirements placed on them. The interview reveals five new, previously unstudied application areas for brain-computer interfaces: a mucus suction device for clearing the pharynx, a speech synthesizer that reads written text aloud, a lifting device with which a person can lift themselves out of bed, a toilet seat that performs washing, and a multifunctional device that assists with eating and changing position. In addition, two new requirements for brain-computer interfaces are identified: the need to consider the ease of use of the applications from the assistants' point of view, and a requirement of flexibility, i.e. that a single brain-computer interface should be able to perform many different functions. The interview reinforces the view that end users should be involved in the development of brain-computer interfaces and that a more user-centred perspective should be adopted, which has so far not been the starting point of research.
  • Hietasaari, Antti (2016)
    The thesis evaluates the suitability of actor-based concurrency solutions for game engines. It first introduces the basic principles of game engines and actor-based concurrency, and then presents Stage, an actor-based game engine implementation. Finally, the thesis examines the performance and ease of use of the Stage engine compared to a game engine that uses traditional lock-based concurrency solutions.
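    As background for the actor model discussed above, the sketch below shows a minimal actor: an entity with a private mailbox that processes one message at a time, so no locks are needed around its state. It is a generic Python illustration, not code from, or the API of, the Stage engine.

        # A minimal actor: private state, a mailbox, and a message loop.
        # State is only touched by the actor's own thread, so no locks are
        # needed. Generic illustration only, not the Stage engine's API.
        import queue
        import threading

        class Actor:
            def __init__(self):
                self.mailbox = queue.Queue()
                self.position = (0, 0)   # private state, e.g. a game entity
                self._thread = threading.Thread(target=self._run)
                self._thread.start()

            def send(self, message):
                # Other actors interact only by enqueuing messages.
                self.mailbox.put(message)

            def _run(self):
                # One message at a time: no data races on self.position.
                while True:
                    kind, payload = self.mailbox.get()
                    if kind == "move":
                        dx, dy = payload
                        x, y = self.position
                        self.position = (x + dx, y + dy)
                    elif kind == "stop":
                        break

        player = Actor()
        player.send(("move", (1, 2)))
        player.send(("stop", None))
        player._thread.join()
        print(player.position)   # (1, 2)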
  • Heinonen, Kenny (2015)
    Materials used in teaching are typically static in content. Books and pictures are examples of static teaching materials. To complement static material, intelligent learning systems have been developed, and they have become common in computer science education. Characteristic features of an intelligent learning system are dynamism and interactivity. Thanks to the interactive mechanisms offered by the system, the user can communicate with the system and learn the topic it teaches. The content offered by the learning system and the feedback given to the user vary based on the user's input. Intelligent learning systems fall into several categories, including visualization and simulation systems, assessment and tutoring systems, programming environments, and learning games. This thesis examines what kinds of intelligent learning systems exist and what they are used for. The systems are surveyed from articles published at the SIGCSE conference in 2009–2014. The technical implementation of the surveyed systems is examined from the point of view of a few general properties, such as whether they are web-based. Over the five-year period the systems have, generally speaking, not evolved. Intelligent learning systems are widely used, but they are not replacing traditional classroom teaching; rather, they are mostly intended to support it.
  • Pennanen, Teppo (2015)
    This thesis is a study of Lean Startup metrics. It attempts to answer what is measured and how in Lean Startup, how this differs from other software measurement, and what information needs today's start-ups have. The study comprises a literature review and an empirical survey of real start-ups. It explains how software measurement has changed over the years and what kinds of metrics Lean Startup suggests should be used and why. It shows differences in the use of measurement between traditional start-ups and Lean Startups. The study suggests reasons and motivations for using measurement in start-ups and gives examples of when not to. Within the scope of this study, a survey with questionnaires and interviews was conducted. It showed distinctly different attitudes towards measurement between traditional start-up entrepreneurs and those who like to call themselves Lean Startup entrepreneurs. Measurement in Lean Startup is not an end in itself, but a useful tool for gaining feedback on the gut feelings of an entrepreneur. Metrics, when meaningful and correct, communicate the focus within a start-up and objectively evaluate the business' success.
  • Islam, Hasan Mahmood Aminul (2013)
    The Web has introduced a new technology for a more distributed and collaborative form of communication, in which the browser and the user replace the web server as the nexus of communication: after call establishment through web servers, communication takes place directly between browsers in a peer-to-peer fashion without the intervention of the web servers. The goal of the Real Time Collaboration on the World Wide Web (RTCWeb) project is to allow browsers to natively support voice, video, and gaming in interactive peer-to-peer communication and real-time data collaboration. Several transport protocols, such as TCP, UDP, RTP, SRTP, SCTP, and DCCP, presently exist for the communication of media and non-media data. However, no single protocol alone can meet all the requirements of RTCWeb. Moreover, deploying a new transport protocol runs into problems traversing middleboxes such as Network Address Translation (NAT) boxes and firewalls. Furthermore, the transport of non-media data in the very first versions of RTCWeb does not include any congestion control on the end-points. With media (i.e., audio, video) the amount of traffic can be determined and limited by the codec and profile used during communication, whereas an RTCWeb user could generate enough non-media data to congest the network. Therefore, a suitable transport protocol stack is required that provides congestion control, a NAT traversal solution, and authentication, integrity, and privacy of user data. This master's thesis analyses the transport protocol stack for the data channel in RTCWeb and selects the Stream Control Transmission Protocol (SCTP), a reliable, message-oriented, general-purpose transport layer protocol operating on top of both IPv4 and IPv6, which provides congestion control similar to TCP and, additionally, new functionality regarding security, multihoming, multistreaming, mobility, and partial reliability. However, due to the lack of universal availability of SCTP within operating systems, it was decided to use the userland SCTP implementation. WebKit is an open source web browser engine for rendering web pages, used by Safari, Dashboard, Mail, and many other OS X applications. In the WebKit RTCWeb implementation using the GStreamer multimedia framework, RTP/UDP is utilized for the communication of media data and UDP tunnelling for non-media data. Therefore, in order to allow a smooth integration of the implementation within WebKit, we decided to implement GStreamer plugins using the userland SCTP stack. This thesis also investigates the way Mozilla has integrated these protocols into the browser's network stack and how the Data Channel has been designed and implemented using the userland SCTP stack.
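    For readers unfamiliar with SCTP's socket API, the sketch below opens a one-to-one SCTP association using Python's standard socket module. It is a minimal sketch that assumes a Linux host with kernel SCTP support and a hypothetical local test server; the RTCWeb data channel itself runs SCTP in a userland stack, which this sketch does not show.

        # Minimal kernel-SCTP client sketch (assumes Linux with SCTP
        # support). The RTCWeb data channel uses a userland SCTP stack
        # instead; this only illustrates SCTP's message-oriented,
        # TCP-like one-to-one socket style.
        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM,
                             socket.IPPROTO_SCTP)
        sock.connect(("127.0.0.1", 5000))   # hypothetical local test server
        sock.sendall(b"hello over SCTP")    # sent with message boundaries
        reply = sock.recv(1024)
        sock.close()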
  • Peltonen, Ella (2013)
    Cloud computing offers important resources, performance, and services now that it has become popular to collect, store, and analyze large data sets. This thesis builds on the Berkeley Data Analytics Stack (BDAS), a cloud computing environment designed for Big Data handling and analysis. In particular, two parts of the BDAS are introduced: the cluster resource manager Mesos and the distributed computing framework Spark. They offer features important for cloud computing, such as efficiency, multi-tenancy, and fault tolerance. The Spark system extends MapReduce, the well-known cloud computing paradigm. Machine learning algorithms can predict trends and anomalies in large data sets. This thesis presents one of them, a distributed decision tree algorithm, implemented on the Spark system. As an example case, the decision tree is used on the versatile energy consumption data collected from mobile devices, such as smartphones and tablets, by the Carat project. The data consists of information about the usage of the device, such as which applications have been running, network connections, battery temperatures, and screen brightness. The decision tree aims to find chains of data features that might lead to energy consumption anomalies. The results of the analysis can be used to advise users on how to improve their battery life. The thesis presents selected analysis results together with the advantages and disadvantages of the decision tree analysis.
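    As an illustration of the kind of analysis described, the sketch below trains a decision tree with Spark's RDD-based MLlib API. The feature layout and the input path are hypothetical stand-ins for the Carat data, which is not reproduced here.

        # Decision tree on Spark with the RDD-based MLlib API. The input
        # layout (label plus three device-usage features) and the path
        # are hypothetical stand-ins for the Carat energy data.
        from pyspark import SparkContext
        from pyspark.mllib.regression import LabeledPoint
        from pyspark.mllib.tree import DecisionTree

        sc = SparkContext(appName="energy-anomaly-tree")

        def parse_line(line):
            # label, battery_temp, screen_brightness, n_running_apps
            values = [float(x) for x in line.split(",")]
            return LabeledPoint(values[0], values[1:])

        data = sc.textFile("hdfs:///data/carat_sample.csv").map(parse_line)
        train, test = data.randomSplit([0.7, 0.3], seed=42)

        model = DecisionTree.trainClassifier(
            train, numClasses=2, categoricalFeaturesInfo={},
            impurity="gini", maxDepth=5)

        # Compare predictions against held-out labels.
        predictions = model.predict(test.map(lambda p: p.features))
        labels_and_preds = test.map(lambda p: p.label).zip(predictions)
        error = (labels_and_preds.filter(lambda lp: lp[0] != lp[1]).count()
                 / float(test.count()))
        print("test error: %.3f" % error)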
  • Niemistö, Juho (2014)
    Android, developed by Google, has in recent years become the mobile operating system with the largest market share. Anyone can develop applications for Android, and the tools needed for development are freely available. Over a million different applications have already been developed. Application quality is especially important on the Android platform, where competition is fierce and application prices are so low that price is no obstacle to switching to another application. The application store is also always available directly from the device. This places demands on application testing as well: applications should reach the store quickly, but their quality should also be good. Testing tools should therefore be easy to use and effective. Indeed, numerous testing tools have been developed for Android in addition to Google's own. This thesis examines the structure of Android applications, their testing, and automated testing tools for the Android platform, focusing in particular on unit testing and functional testing tools. Among unit testing tools, Android's own unit test framework is compared with Robolectric. Among functional testing tools, Uiautomator, Robotium, and Troyd are compared.
  • Pulliainen, Laur (2018)
    Software defect prediction is the process of improving the software testing process by identifying defects in the software. It is accomplished by using supervised machine learning with software metrics and defect data as variables. While the theory behind software defect prediction has been validated in previous studies, it has not been widely implemented in practice. In this thesis, a software defect prediction framework is implemented to improve testing process resource allocation and software release time optimization at RELEX Solutions. For this purpose, code and change metrics are collected from the RELEX software. The metrics are selected based on their frequency of use in other software defect prediction studies and their availability in metric collection tools. In addition to metric data, defect data is collected from the issue tracker. A framework for classifying the collected data is then implemented and experimented with. The framework leverages existing machine learning libraries to provide the classification functionality, using classifiers that have been found to perform well in similar software defect prediction experiments. The classification results are validated using commonly used classifier performance metrics, and in addition the suitability of the predictions is verified from a use-case point of view. It is found that software defect prediction does work in practice, with the implementation achieving results comparable to other similar studies when measured by classifier performance metrics. When validated against the defined use cases, the performance is found acceptable; however, it varies between different data sets. It is thus concluded that while the results are tentatively positive, further monitoring with future software versions is needed to verify the performance and reliability of the framework.
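    The sketch below shows the general shape of such a classification step with scikit-learn: file-level metrics as features, defect labels from the issue tracker as targets, and standard performance metrics for validation. The metric columns, the stand-in data, and the random-forest choice are illustrative assumptions, not the exact setup used at RELEX Solutions.

        # Defect prediction as binary classification with scikit-learn.
        # The feature columns and classifier choice are assumptions.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import precision_score, recall_score, f1_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        # Stand-in data: rows are files, columns are e.g. lines of code,
        # cyclomatic complexity, and number of recent changes.
        X = rng.random((200, 3))
        y = (X[:, 2] > 0.7).astype(int)  # synthetic "defective" label

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.3, random_state=0)

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X_train, y_train)
        pred = clf.predict(X_test)

        print("precision:", precision_score(y_test, pred))
        print("recall:   ", recall_score(y_test, pred))
        print("f1:       ", f1_score(y_test, pred))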
  • Enberg, Pekka (2016)
    Hypervisors and containers are the two main virtualization techniques that enable cloud computing. Both techniques have performance overheads on CPU, memory, networking, and disk performance compared to bare metal. Unikernels have recently been proposed as an optimization for hypervisor-based virtualization to reduce these performance overheads. In this thesis, we evaluate network I/O performance overheads for hypervisor-based virtualization using the Kernel-based Virtual Machine (KVM) and the OSv unikernel, and for container-based virtualization using Docker, comparing different configurations and optimizations. We measure raw networking latency, throughput, and CPU utilization using the Netperf benchmarking tool, and we measure network-intensive application performance using the Memcached key-value store and the Mutilate benchmarking tool. We show that, compared to bare-metal Linux, Docker with bridged networking has the lowest performance overhead, with OSv using vhost-net coming a close second.
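    For readers who want to reproduce a similar raw-latency measurement, the sketch below runs a Netperf TCP request/response test from Python and prints its report. The host name is a placeholder and a netserver must already be running on it, so treat this as a sketch of the measurement, not the thesis's benchmarking harness.

        # Run a Netperf TCP request/response (latency-oriented) test from
        # Python. "server.example" is a placeholder for the netserver
        # host; netserver must already be running there.
        import subprocess

        result = subprocess.run(
            ["netperf", "-H", "server.example", "-t", "TCP_RR", "-l", "10"],
            capture_output=True, text=True, check=True)

        # The final column of the results table is the transaction rate
        # (round trips per second); its inverse approximates mean latency.
        print(result.stdout)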
  • Ibbad, Hafeez (2016)
    The number of devices connected to the Internet is growing exponentially. These devices include smartphones, tablets, workstations, and Internet of Things devices, which offer cost and time savings by automating routine tasks for their users. However, these devices also introduce a number of security and privacy concerns. They are connected to small office/home office (SOHO) and enterprise networks, where users have little to no information about the threats associated with these devices and about how the devices can be managed properly to ensure the user's privacy and data security. We propose a new platform to automate the security and management of the networks providing connectivity to billions of connected devices. Our platform is a low-cost, scalable, and easy-to-deploy system that provides network security and management features as a service. It consists of two main components: Securebox and the Security and Management Service (SMS). Securebox is a newly designed OpenFlow-enabled gateway residing in edge networks, responsible for enforcing the security and management decisions provided by the SMS. The SMS runs a number of traffic analysis services to analyze user traffic on demand for botnet, spamnet, and malware detection. The SMS also supports deploying on-demand software-based middleboxes for analysis of user traffic in an isolated environment, and it handles the configuration updates, load balancing, and scalability of these middlebox deployments as well. In contrast to the current state of the art, the proposed platform offloads the security and management tasks to an external entity, providing a number of advantages in terms of deployment, management, configuration updates, and device security. We have tested this platform in real-world scenarios. Evaluation results show that the platform can be deployed efficiently in traditional networks in an incremental manner, and that it achieves a user experience similar to that of security features embedded in the connectivity.
  • Schneider, Jenna (2017)
    Missing user needs and requirements often lead to sub-optimal software products that users find difficult to use. Software development approaches that follow the user-centered design paradigm try to overcome this problem by focusing on the needs and goals of end users throughout the process life cycle. The purpose of this thesis is to examine how three different user-centered design methodologies approach the development of user requirements and determine whether the ideal requirements development processes described in these methodologies are applicable to the practice of the software industry. The results of this investigation are finally used as a guideline for defining a high-level requirements development framework for the IT department of a large multinational company.