Browsing by department "Tietojenkäsittelytieteen laitos"

  • Asena, Dawit (2017)
    In the health-care domain, computing-based solutions are frequently sought for the efficient management and operation of hospitals or independent surgeries. Overall surgery scheduling plays an important role in matching the supply of and demand for the services of a surgery. Internal competition for resources makes surgery scheduling challenging. A surgery scheduler must consider uncertainty in the duration of each surgery and other critical events, such as the arrival of unexpected urgent patients and cancellations on the day of surgery, while efficiently managing a variety of human and technical resources. Surgery schedules must balance surgery requests against the available resources and at the same time provide efficient service to patients. This thesis presents a dynamic surgery-scheduling approach based on the idea of Virtual Organization creation. A virtual organization (VO) is a collaborative partnership model that allows small businesses to share resources and compete with larger businesses. Dynamic scheduling of a surgery requires the collaboration of various participants to overcome coordination challenges. The effectiveness of a surgery schedule can be monitored from the surgeries performed. Moreover, inadequacies of the schedule can be detected beforehand, and the necessary measures can be taken to avoid waste of resources and cancellation of surgeries due to overbooking. The VO-based model is more rigorous than those usually presented in the research literature. In this thesis a conceptual model is designed to show the interaction of partners. The model is validated using sample populated data. The roles of and dependencies among entities are described in the model, which uses a fact-oriented approach: Object Role Modeling (ORM) is used to show the interaction of entities and their roles, and NORMA is used as the development tool. The validation using facts and sample populated data determines the validity of the conceptual model and demonstrates that the collaboration partners benefit from the coordinated surgery schedule.
  • Haris, Muhammad (2018)
    The number of applications used for voice and video calls is growing day by day. The basic purpose of all of these applications is to provide better call services to their customers. These applications differ in their underlying communication protocols and encoding techniques. These variations in protocols are introduced to enhance the user experience, improve the performance of VoIP applications, and minimize the network delay between the end users. Thus, to improve call quality, much research has been done on analyzing and detecting VoIP traffic. There are two basic goals of this thesis: 1) analysis of the VoIP traffic of five popular VoIP applications (Skype, Viber, WhatsApp, Facebook Messenger, and IMO); 2) classification of VoIP traffic based on the analysis in the first step. We adopted a flow-based analysis technique for the analysis and classification of VoIP traffic. For the first step, we analyzed three flow features (packet rate, inter-packet gap, and packet size) of the above-chosen VoIP applications. In addition, we also present a detailed explanation of three factors (the access network, the underlying operating system, and the geographic distance between the caller and the callee) which can affect the distribution of the flow features. To realize our second goal, we present an analysis algorithm that uses the exponentially weighted moving average (EWMA) and the exponentially weighted moving standard deviation (EWMSD). Our analysis algorithm is based on the statistical distribution of EWMA/EWMSD for each of the selected flow features mentioned above. The advantage of using EWMA/EWMSD statistics for classification is that they do not depend on network protocols; thus, a generic classification approach can be applied to all three flow features described above. Finally, based on our analysis, we introduce a classifier algorithm and its variants to classify a VoIP call as a voice or video call. The classifier algorithm monitors EWMA/EWMSD statistics for the flow features and classifies the content of the VoIP call accordingly. Our results demonstrate two important things: 1) each of the above-mentioned flow features can be used with EWMA/EWMSD statistics for VoIP traffic classification; 2) the inter-packet gap feature with EWMA/EWMSD statistics gives more accurate results than packet size and packet rate.
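    As an illustration of the approach described above, the following minimal Python sketch computes EWMA/EWMSD over a flow's inter-packet gaps and applies a threshold rule; the smoothing factor and decision threshold are hypothetical placeholders, not the tuned values from the thesis.

```python
# Minimal sketch of EWMA/EWMSD-based flow classification (illustrative only;
# alpha and the gap threshold below are hypothetical, not the thesis's values).

def ewma_ewmsd(samples, alpha=0.1):
    """Exponentially weighted moving average and standard deviation."""
    mean, var = samples[0], 0.0
    for x in samples[1:]:
        diff = x - mean
        mean += alpha * diff
        var = (1 - alpha) * (var + alpha * diff * diff)
    return mean, var ** 0.5

def classify_call(inter_packet_gaps_ms, gap_threshold_ms=15.0):
    """Label a flow as 'voice' or 'video' from its inter-packet gaps.

    Voice codecs tend to emit packets at a steady, comparatively long
    interval; video flows are burstier with shorter average gaps.
    """
    mean, sd = ewma_ewmsd(inter_packet_gaps_ms)
    return "voice" if mean > gap_threshold_ms else "video"

# Example: a steady ~20 ms gap pattern is classified as voice.
print(classify_call([20.1, 19.8, 20.3, 20.0, 19.9] * 10))
```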
  • Järvinen, Ilpo (University of Helsinki, 2006)
    Wireless access is expected to play a crucial role in the future of the Internet. The demands of the wireless environment are not always compatible with the assumptions that were made in the era of wired links. At the same time, new services that take advantage of the advances in many areas of technology are being invented. These services include the delivery of mass media like television and radio, Internet phone calls, and video conferencing. The network must be able to deliver these services with acceptable performance and quality to the end user. This thesis presents an experimental study that measures the performance of bulk data TCP transfers, streaming audio flows, and HTTP transfers which compete for the limited bandwidth of a GPRS/UMTS-like wireless link. The wireless link characteristics are modeled with a wireless network emulator. We analyze how different competing workload types behave with regular TCP and how active queue management, Differentiated Services (DiffServ), and a combination of TCP enhancements affect the performance and quality of service. We test four link types, including an error-free link and links with different Automatic Repeat reQuest (ARQ) persistency. The analysis consists of comparing the resulting performance in different configurations based on defined metrics. We observed that DiffServ and Random Early Detection (RED) with Explicit Congestion Notification (ECN) are useful, and in some conditions necessary, for quality of service and fairness, because long queuing delays and congestion-related packet losses cause problems without them. However, we observed situations where there is still room for significant improvement if the link level is aware of the quality of service. Only a very error-prone link diminishes the benefits to nil. The combination of TCP enhancements improves performance. These include an initial window of four, Control Block Interdependence (CBI), and Forward RTO recovery (F-RTO). The initial window of four helps a later-starting TCP flow start faster but generates congestion under some conditions. CBI prevents slow-start overshoot and balances slow start in the presence of error drops, and F-RTO successfully reduces unnecessary retransmissions.
  • Pennanen, Teppo (2015)
    This thesis is a study of Lean Startup metrics. It attempts to answer what is measured and how in Lean Startup, how this differs from other software measurement, and what information needs today's start-ups have. The study comprises a literature review and an empirical survey of real start-ups. It explains how software measurement has changed over the years and what kinds of metrics Lean Startup suggests should be used and why. It shows differences in the use of measurement between traditional start-ups and Lean Startups. The study suggests reasons and motivations for using measurement in start-ups and gives examples of when not to. Within the scope of this study, a survey with questionnaires and interviews was conducted. It showed distinctly different attitudes towards measurement between traditional start-up entrepreneurs and those who like to call themselves Lean Startup entrepreneurs. Measurement in Lean Startup is not an end in itself, but a useful tool for obtaining feedback on the gut feelings of an entrepreneur. Metrics, when meaningful and correct, communicate the focus within a start-up and objectively evaluate the business's success.
  • Islam, Hasan Mahmood Aminul (2013)
    The Web has introduced a new, more distributed and collaborative form of communication, in which the browser and the user replace the web server as the nexus of communications: after call establishment through web servers, communication is performed directly between browsers in a peer-to-peer fashion, without intervention of the web servers. The goal of the Real Time Collaboration on the World Wide Web (RTCWeb) project is to allow browsers to natively support voice, video, and gaming in interactive peer-to-peer communications and real-time data collaboration. Several transport protocols, such as TCP, UDP, RTP, SRTP, SCTP, and DCCP, presently exist for the communication of media and non-media data. However, no single protocol alone can meet all the requirements of RTCWeb. Moreover, the deployment of a new transport protocol experiences problems traversing middleboxes such as Network Address Translation (NAT) boxes and firewalls. Furthermore, the implementation for transporting non-media data in the very first versions of RTCWeb does not include any congestion control at the end-points. With media (i.e., audio, video) the amount of traffic can be determined and limited by the codec and profile used during communication, whereas an RTCWeb user could generate enough non-media data to create congestion in the network. Therefore, a suitable transport protocol stack is required that provides congestion control, a NAT traversal solution, and authentication, integrity, and privacy of user data. This master's thesis emphasizes the analysis of the transport protocol stack for the data channel in RTCWeb and selects the Stream Control Transmission Protocol (SCTP), a reliable, message-oriented, general-purpose transport layer protocol operating on top of both IPv4 and IPv6, providing congestion control similar to TCP and, additionally, new functionality regarding security, multihoming, multistreaming, mobility, and partial reliability. However, due to the lack of universal availability of SCTP within operating systems, it was decided to use the SCTP userland implementation. WebKit is an open-source web browser engine for rendering web pages used by Safari, Dashboard, Mail, and many other OS X applications. In the WebKit RTCWeb implementation, which uses the GStreamer multimedia framework, RTP/UDP is utilized for the communication of media data and UDP tunnelling for non-media data. Therefore, to allow a smooth integration of the implementation within WebKit, we decided to implement GStreamer plugins using the SCTP userland stack. This thesis also investigates the way Mozilla has integrated those protocols into the browser's network stack and how the Data Channel has been designed and implemented using the SCTP userland stack.
  • Zuniga Corrales, Wladimir Agustin (2018)
    The unrelenting expansion of mobile technologies has produced a swift increase in smartphones with higher computational power, whose sophisticated sensing and communication capabilities have provided the foundations for developing apps on the move with PC-like functionality. Indeed, nowadays apps are almost everywhere, and their number has increased exponentially, with the Apple App Store, Google Play, and other mobile app marketplaces offering millions of apps to users. In this scenario, it is common to find several apps providing similar functionality to users. However, only a fraction of these applications has a long-term survival rate in app stores. Retention is a metric widely used to quantify the lifespan of mobile apps: higher app retention corresponds to higher adoption and level of engagement. While existing scientific studies have analysed mobile users' behaviour and support the existence of factors that influence app retention, quantification of how these factors affect long-term usage is still missing. In this thesis, we contribute to these studies by quantifying and modelling one of the critical factors that affect app retention: performance. We deepen the analysis of performance through two key performance-related variables: network connectivity and battery consumption. The analysis is performed by combining two large-scale crowdsensed datasets, the first containing measurements of network quality and the second of app usage and energy consumption. Our results show the benefits of data fusion in introducing richer contexts that cannot be discovered when analysing the data sources individually. We also demonstrate that high variations of these variables, together and individually, affect the likelihood of long-term app usage, but also that retention is regulated by what users consider reasonable standards of performance, meaning that improving latency and energy consumption does not guarantee higher retention. To provide further insights, we develop a model that predicts retention using performance-related variables. Its accuracy allows generalising the effect of performance on long-term usage across categories, locations, and moderating variables.
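    A hedged sketch of the kind of retention model described above: a logistic regression over performance variables. The feature set and data here are hypothetical stand-ins for the crowdsensed latency and energy measurements the thesis combines.

```python
# Sketch of a retention model from performance variables (illustrative; the
# features and data are hypothetical, not the thesis's datasets or model).
from sklearn.linear_model import LogisticRegression

# One row per (app, user) pair: [median_network_latency_ms,
# battery_drain_pct_per_hour]; label 1 = app still used after 30 days.
X = [[ 80, 1.0], [120, 1.5], [400, 4.0], [600, 5.0],
     [100, 1.2], [500, 4.5], [ 90, 0.8], [450, 3.8]]
y = [1, 1, 0, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X, y)
# Predicted probability of long-term retention for a fast, frugal app
# versus a slow, battery-hungry one.
print(model.predict_proba([[100, 1.0], [550, 4.8]])[:, 1])
```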
  • Peltonen, Ella (2013)
    Cloud computing offers important resources, performance, and services now that it has become popular to collect, store, and analyze large data sets. This thesis builds on the Berkeley Data Analysis Stack (BDAS) as a cloud computing environment designed for Big Data handling and analysis. Two parts of BDAS in particular are introduced: the cluster resource manager Mesos and the distributed processing framework Spark. They offer features important for cloud computing, such as efficiency, multi-tenancy, and fault tolerance. The Spark system extends MapReduce, the well-known cloud computing paradigm. Machine learning algorithms can predict trends and anomalies in large data sets. This thesis presents one of them, a distributed decision tree algorithm, implemented on the Spark system. As an example case, the decision tree is used on the versatile energy consumption data from mobile devices, such as smartphones and tablets, of the Carat project. The data consists of information about the usage of the device, such as which applications have been running, network connections, battery temperatures, and screen brightness. The decision tree aims to find chains of data features that might lead to energy consumption anomalies. Results of the analysis can be used to advise users on how to improve their battery life. This thesis presents selected analysis results together with the advantages and disadvantages of the decision tree analysis.
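    To make the decision-tree idea concrete, here is a single-machine sketch using scikit-learn; the thesis's actual implementation is distributed on Spark, and the feature names and data below are hypothetical.

```python
# Single-machine analogue of the distributed decision tree (illustrative
# sketch only; feature names and samples are hypothetical Carat-like data).
from sklearn.tree import DecisionTreeClassifier, export_text

# Each sample: [screen_brightness, battery_temperature_C, n_running_apps,
#               on_wifi (0/1)]; label 1 marks an energy-consumption anomaly.
X = [
    [0.9, 41.0, 12, 0],
    [0.3, 28.0,  3, 1],
    [0.8, 39.5, 10, 0],
    [0.2, 25.0,  2, 1],
    [0.7, 37.0,  9, 0],
    [0.4, 30.0,  4, 1],
]
y = [1, 0, 1, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
# The learned splits form the "chains of data features" the abstract
# mentions, e.g. high temperature AND many running apps -> anomaly.
print(export_text(tree, feature_names=[
    "brightness", "battery_temp", "n_apps", "on_wifi"]))
```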
  • Niemistö, Juho (2014)
    Android, developed by Google, has in recent years become the mobile operating system with the largest market share. Anyone can develop applications for Android, and the tools needed for development are freely available. Indeed, over a million different applications have already been developed. Application quality is especially important on the Android platform, where competition is plentiful and application prices are so low that price poses no obstacle to switching from one application to another. The application store is also always available directly from the device. This poses challenges for application testing as well: on the one hand, applications should reach the application store quickly, but their quality should also be good. Testing tools should therefore be easy to use and effective. Indeed, numerous testing tools have been developed for Android in addition to Google's own. This thesis examines the structure of Android applications, their testing, and the automated testing tools available on the Android platform, focusing in particular on tools for unit testing and functional testing. Of the unit testing tools, Android's own unit testing framework is compared with Robolectric; of the functional testing tools, Uiautomator, Robotium, and Troyd are compared.
  • Boström, Fredrik (University of Helsinki, 2008)
    Portable music players have made it possible to listen to a personal collection of music in almost every situation, and they are often used during some activity to provide a stimulating audio environment. Studies have demonstrated the effects of music on the human body and mind, indicating that selecting music according to the situation can, besides making the situation more enjoyable, also make humans perform better. For example, music can boost performance during physical exercise, alleviate stress, and positively affect learning. We believe that people intuitively select different types of music for different situations. Based on this hypothesis, we propose a portable music player, AndroMedia, designed to provide personalised music recommendations using the user's current context and listening habits together with other users' situational listening patterns. We have developed a prototype that consists of a central server and a PDA client. The client uses Bluetooth sensors to acquire context information and logs user interaction to infer implicit user feedback. The user interface also allows the user to give explicit feedback. Large user interface elements facilitate touch-based usage in busy environments. The prototype provides the necessary framework for using the collected information together with other users' listening history in a context-enhanced collaborative filtering algorithm to generate context-sensitive recommendations. The current implementation is limited to traditional collaborative filtering algorithms. We outline the techniques required to create context-aware recommendations and present a survey of mobile context-aware music recommenders found in the literature. As opposed to the surveyed systems, AndroMedia utilises other users' listening habits when suggesting tunes, and does not require any laborious set-up process.
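    For illustration, a minimal user-based collaborative filtering sketch of the kind AndroMedia's recommender builds on; the context weighting of the thesis's context-enhanced variant is omitted here, and the feedback matrix is hypothetical.

```python
# Minimal user-based collaborative filtering sketch (illustrative only;
# the ratings matrix is hypothetical and context weighting is omitted).
import numpy as np

# ratings[u, t]: implicit feedback of user u for track t (0 = unheard).
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 5, 4],
    [0, 1, 5, 4],
], dtype=float)

def recommend(user, k=1):
    # Cosine similarity between the target user and every other user.
    norms = np.linalg.norm(ratings, axis=1, keepdims=True)
    sims = (ratings @ ratings[user]) / (norms.squeeze() * norms[user] + 1e-9)
    sims[user] = 0.0
    # Predicted score: similarity-weighted average of other users' feedback.
    scores = sims @ ratings / (sims.sum() + 1e-9)
    scores[ratings[user] > 0] = -np.inf   # ignore already-heard tracks
    return np.argsort(scores)[::-1][:k]

print(recommend(user=1))  # suggests a track liked by similar users
```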
  • Pulliainen, Laur (2018)
    Software defect prediction is the process of improving software testing by identifying defect-prone parts of the software. It is accomplished using supervised machine learning with software metrics and defect data as variables. While the theory behind software defect prediction has been validated in previous studies, it has not been widely implemented in practice. In this thesis, a software defect prediction framework is implemented to improve testing-process resource allocation and software release time optimization at RELEX Solutions. For this purpose, code and change metrics are collected from RELEX software. The metrics are selected based on their frequency of use in other software defect prediction studies and their availability in metric collection tools. In addition to metric data, defect data is collected from the issue tracker. A framework for classifying the collected data is then implemented and experimented on. The framework leverages existing machine learning libraries to provide classification functionality, using classifiers that have been found to perform well in similar software defect prediction experiments. The classification results are validated using commonly used classifier performance metrics, and the suitability of the predictions is additionally verified from a use-case point of view. It is found that software defect prediction does work in practice, with the implementation achieving results comparable to other similar studies as measured by classifier performance metrics. When validating against the defined use cases, the performance is found to be acceptable, although it varies between data sets. It is thus concluded that while the results are tentatively positive, further monitoring with future software versions is needed to verify the performance and reliability of the framework.
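    A brief sketch of the classification step described above, assuming a random-forest classifier over common code/change metrics; the metric names and data are hypothetical, not RELEX's actual feature set or pipeline.

```python
# Sketch of the defect-classification step (illustrative; the metric names
# and data are hypothetical stand-ins for the collected RELEX metrics).
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, f1_score

# One row per source file: [lines_of_code, cyclomatic_complexity,
# n_changes_last_release, n_authors]; label 1 = file had a defect.
X = [[1200, 35, 14, 5], [90, 4, 1, 1], [800, 22, 9, 3], [150, 6, 2, 1],
     [2000, 48, 20, 6], [60, 3, 0, 1], [950, 30, 11, 4], [200, 8, 3, 2]]
y = [1, 0, 1, 0, 1, 0, 1, 0]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

# The kind of classifier performance metrics the thesis validates against.
print(precision_score(y_te, pred, zero_division=0),
      recall_score(y_te, pred, zero_division=0),
      f1_score(y_te, pred, zero_division=0))
```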
  • Liu, Yanhe (2015)
    Cellular networks are facing a data explosion posed by the increasing bandwidth demand of current mobile applications, and cellular operators are trying to leverage auxiliary networks and offload mobile data to relieve this challenge. However, traffic offloading without comprehensive control may result in poor network utilization and undesirable user experience. In this thesis, we design and implement an integrated architecture for intelligent traffic offloading over collaborative WiFi-cellular networks. Motivated by our measurements, we formulate a mathematical model to estimate and evaluate potential offloading throughput based on various pieces of wireless context information, such as AP signal strength and bandwidth. To efficiently manage traffic and collect information, we use a centralized SDN architecture in our design. The proposed system enables mobile devices to choose the most beneficial AP for offloading. The experimental evaluation of our prototype implementation demonstrates that this architecture can achieve optimal traffic offloading by considering real factors instead of making naive decisions. This effort not only explores the feasibility of context-based traffic offloading, but also provides guidelines for designing and implementing a centralized SDN platform for wireless networks.
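    As an illustration of context-based AP selection, the following sketch scores each AP by an estimated offloading throughput; this scoring function is a hypothetical stand-in for the mathematical model formulated in the thesis.

```python
# Sketch of context-based AP selection (illustrative; this scoring function
# is a hypothetical stand-in for the thesis's throughput-estimation model).
def estimated_throughput(ap):
    """Scale the AP's nominal bandwidth by signal quality.

    rssi_dbm is mapped linearly from a weak (-90 dBm) to a strong (-40 dBm)
    signal; a real model would also account for load and loss.
    """
    quality = min(max((ap["rssi_dbm"] + 90) / 50.0, 0.0), 1.0)
    return ap["bandwidth_mbps"] * quality

aps = [
    {"ssid": "cafe",   "rssi_dbm": -75, "bandwidth_mbps": 20.0},
    {"ssid": "office", "rssi_dbm": -50, "bandwidth_mbps": 50.0},
    {"ssid": "lobby",  "rssi_dbm": -85, "bandwidth_mbps": 100.0},
]

# Offload to the AP with the highest estimated throughput rather than,
# naively, the one with the largest nominal bandwidth.
best = max(aps, key=estimated_throughput)
print(best["ssid"], estimated_throughput(best))
```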
  • Holmberg, Robert (University of Helsinki, 2006)
    One way to improve results in information retrieval is query expansion. In query expansion, the user's original query is augmented with terms relating to the same topic. Queries that have a high similarity score with a document can be assumed to describe the document well and can therefore serve as a source of good expansion terms. If previous queries are stored, terms found through them can be used as candidates for query expansion terms. The thesis presents and compares three methods for using previous queries in query expansion. To evaluate the effectiveness of the methods, they are compared using the Lucene search engine and a small collection of documents on cancer research. The unmodified queries and a simple pseudo-relevance feedback method that does not use previous queries serve as baselines. None of the query expansion methods performed particularly well, because the document collection and the test queries constitute a difficult environment for this type of method.
  • Enberg, Pekka (2016)
    Hypervisors and containers are the two main virtualization techniques that enable cloud computing. Both techniques have overheads in CPU, memory, networking, and disk performance compared to bare metal. Unikernels have recently been proposed as an optimization for hypervisor-based virtualization to reduce these performance overheads. In this thesis, we evaluate network I/O performance overheads for hypervisor-based virtualization using the Kernel-based Virtual Machine (KVM) and the OSv unikernel, and for container-based virtualization using Docker, comparing different configurations and optimizations. We measure raw networking latency, throughput, and CPU utilization using the Netperf benchmarking tool, and measure network-intensive application performance using the Memcached key-value store and the Mutilate benchmarking tool. We show that, compared to bare metal Linux, Docker with bridged networking has the least performance overhead, with OSv using vhost-net coming a close second.
  • Ibbad, Hafeez (2016)
    The number of devices connected to the Internet is growing exponentially. These devices include smartphones, tablets, workstations, and Internet of Things devices, which offer a number of cost and time savings by automating routine tasks for their users. However, these devices also introduce a number of security and privacy concerns. They are connected to small office/home office (SOHO) and enterprise networks, where users have very little to no information about the threats associated with these devices and how they can be managed properly to ensure the user's privacy and data security. We propose a new platform to automate the security and management of the networks providing connectivity to billions of connected devices. Our platform is a low-cost, scalable, and easy-to-deploy system that provides network security and management features as a service. It consists of two main components: the Securebox and the Security and Management Service (SMS). Securebox is a newly designed OpenFlow-enabled gateway residing in edge networks, responsible for enforcing the security and management decisions provided by SMS. SMS runs a number of traffic analysis services to analyze user traffic on demand for botnet, spamnet, and malware detection. SMS also supports deploying software-based middleboxes for on-demand analysis of user traffic in an isolated environment, and it handles the configuration updates, load balancing, and scalability of these middlebox deployments as well. In contrast to the current state of the art, the proposed platform offloads the security and management tasks to an external entity, providing a number of advantages in terms of deployment, management, configuration updates, and device security. We have tested this platform in real-world scenarios. Evaluation results show that the platform can be efficiently deployed in traditional networks in an incremental manner, and that it achieves a user experience similar to having the security features embedded in the connectivity itself.
  • Bandyopadhyay, Payel (2015)
    The way users interact with Information Retrieval (IR) systems is a topic of interest in the fields of Human Computer Interaction (HCI) and IR. With the ever-increasing amount of information on the web, users are often lost in the vast information space, and navigating this complex space to find the required information is often an arduous task. One of the reasons is the difficulty of designing systems that present the user with an optimal set of navigation options to support varying information needs. As a solution to the navigation problem, this thesis proposes a method referred to as interaction portfolio theory, based on Markowitz's modern portfolio theory from finance. It provides the user with the N optimal interaction options in each iteration, taking into account the user's goal, expressed via interaction during the task, but also the risk related to a potentially suboptimal choice made by the user. In each iteration, the proposed method learns the relevant interaction options from user behaviour interactively and optimizes relevance and diversity to allow the user to accomplish the task in a shorter interaction sequence. The theory can be applied to any IR system to help users retrieve the required information efficiently.
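    A worked sketch of the mean-variance idea behind interaction portfolio theory: each candidate option has an expected relevance and a covariance with the other options, and a portfolio of N options is chosen to trade relevance against risk. All numbers and the greedy selection below are illustrative assumptions, not the thesis's algorithm.

```python
# Mean-variance selection of interaction options (illustrative sketch; the
# relevance estimates, covariance, and risk weight are hypothetical).
import numpy as np

relevance = np.array([0.8, 0.7, 0.6, 0.3])   # estimated per-option gain
# Covariance between options: similar options are correlated, so picking
# several of them concentrates risk, exactly as in Markowitz's model.
cov = np.array([
    [0.10, 0.08, 0.01, 0.00],
    [0.08, 0.10, 0.01, 0.00],
    [0.01, 0.01, 0.10, 0.02],
    [0.00, 0.00, 0.02, 0.10],
])
risk_aversion = 2.0

def portfolio_objective(weights):
    """Mean-variance objective: expected relevance minus weighted risk."""
    return weights @ relevance - risk_aversion * weights @ cov @ weights

def equal_weights(indices):
    """Equal-weight portfolio vector over the chosen option indices."""
    return np.eye(len(relevance))[indices].sum(axis=0) / len(indices)

# Greedily build a portfolio of N options, re-scoring at each step so the
# second pick favours a diverse option over a redundant, correlated one.
chosen = []
for _ in range(2):
    candidates = [i for i in range(len(relevance)) if i not in chosen]
    best = max(candidates,
               key=lambda i: portfolio_objective(equal_weights(chosen + [i])))
    chosen.append(best)
print(chosen)  # [0, 2]: the diverse option 2 beats the correlated option 1
```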
  • Schneider, Jenna (2017)
    Missing user needs and requirements often lead to sub-optimal software products that users find difficult to use. Software development approaches that follow the user-centered design paradigm try to overcome this problem by focusing on the needs and goals of end users throughout the process life cycle. The purpose of this thesis is to examine how three different user-centered design methodologies approach the development of user requirements and determine whether the ideal requirements development processes described in these methodologies are applicable to the practice of the software industry. The results of this investigation are finally used as a guideline for defining a high-level requirements development framework for the IT department of a large multinational company.
  • Wang, Ziran (2013)
    This thesis considers the problem of finding a process that, given a collection of news, can detect significant dates and breaking news related to different themes. The themes are learned in an unsupervised manner from training corpora, and they mostly have intuitive meanings, like 'finance', 'disaster', 'wars' and so on. They are constructed solely from the textual information provided in the corpora, without any human intervention. To conduct this learning, the thesis uses various component models, specifically Latent Dirichlet Allocation (LDA) and the Correlated Topic Model (CTM). To enrich the experiments, Latent Semantic Analysis (LSA) and Multinomial Principal Component Analysis (MPCA) are also adopted for comparison. The learning assigns each news item a relevance weight for each theme, which can be viewed as a theme distribution from a statistical perspective. With the help of news time-stamp information, one can sum and normalize these distributions over all news items per day, and then plot the movement of the accumulated relevance weights of a theme along the timeline. It is natural to treat these curves as describing the strength of attention the media pays to different themes, and one can assume that behind every peak there are striking events whose associated news can be detected. This thesis is valuable for Media Studies research, and can also be further connected to stock or currency markets for creating real value.
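    The timeline construction lends itself to a short sketch: assuming per-document theme weights have already been inferred (by LDA or a similar model), daily aggregation and a simple mean-plus-standard-deviation rule flag candidate breaking-news dates. The data and threshold here are hypothetical.

```python
# Sketch of the theme-timeline step (illustrative; assumes per-document theme
# weights are already inferred, and the peak threshold is hypothetical).
from collections import defaultdict
import statistics

# (date, theme_weights) pairs: each weight vector is one document's theme
# distribution, here over two themes ('finance', 'disaster').
docs = [
    ("2013-05-01", [0.9, 0.1]), ("2013-05-01", [0.8, 0.2]),
    ("2013-05-02", [0.2, 0.8]), ("2013-05-02", [0.1, 0.9]),
    ("2013-05-03", [0.5, 0.5]),
]

# Sum theme weights per day and normalize, as the abstract describes.
daily = defaultdict(lambda: [0.0, 0.0])
for date, weights in docs:
    for k, w in enumerate(weights):
        daily[date][k] += w
series = {d: [w / sum(ws) for w in ws] for d, ws in sorted(daily.items())}

# Flag days where a theme's share rises well above its mean: candidate
# breaking-news dates for that theme.
finance = [ws[0] for ws in series.values()]
mu, sd = statistics.mean(finance), statistics.pstdev(finance)
peaks = [d for d, ws in series.items() if ws[0] > mu + sd]
print(peaks)  # ['2013-05-01'] for this toy data
```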
  • Islam, Mohammad Shafiqul (2013)
    RNA sequencing is a high-throughput sequencing technology that sequences cDNA to obtain information from a particular sample of RNA. RNA-seq has already proven to be an important tool for researching numerous incurable diseases such as cancers. Like other high-throughput technologies, it produces a huge number of accurate but relatively short reads, so assembling this massive number of short reads remains a challenge for de novo assembly tools. Existing de novo assembly tools for RNA-seq data are based mainly on three different algorithmic approaches: greedy, overlap-layout-consensus (OLC), and Euler-path. Recent research has revealed that the Euler-path approach works best for RNA-seq data. A few de novo assembly tools developed using the Euler-path approach are available nowadays, and most of them can be used free of cost for non-commercial purposes. However, the performance of these tools varies under different criteria, and it has not been measured under all possible conditions, so which tool should be used for particular data remains a matter of concern. The main objective of this thesis is to evaluate four Euler-path de novo assembly tools available for non-commercial use and determine their performance on eukaryotic EST data. The performance criteria are assembly accuracy and integrity. ACM Computing Classification System: Applied computing → Life and medical sciences → Computational biology → Molecular sequence analysis.
  • Michael, Martin Peter (University of Helsinki, 2008)
    Mobile RFID services for the Internet of Things can be created by using RFID as an enabling technology in mobile devices. Humans, devices, and things are the content providers and users of these services. Mobile RFID services can either be provided on mobile devices as stand-alone services or combined with end-to-end systems. When different service solution scenarios are considered, there is more than one possible architectural solution in the network, mobile, and back-end server areas. By combining the solutions wisely and applying software architecture and engineering principles, a combined solution can be formulated for certain application-specific use cases. This thesis illustrates these ideas and shows how the solutions can generally be used in real-world use-case scenarios. A case study is used to add further evidence.
  • Harkonsalo, Olli-Pekka (2018)
    This systematic literature review examines what the decision-making process used for architecturally significant design decisions looks like in practice, which factors influence design decisions, and how architects' rational decision-making process can be supported. The review found that architects make their decisions at least mostly rationally and appeared to benefit from doing so. Architects also did not favour the use of various systematic decision-making or documentation methods. The architects' level of experience affected the decision-making process: less experienced architects made their decisions less rationally (and presumably also less successfully) than more experienced ones. Architects' own experiences and beliefs emerged as an important factor influencing decisions. Besides these and various requirements and constraints, different context-related factors also emerged as influences on decision-making, among them who actually makes the design decisions and how. The literature review revealed that most design decisions are made in groups rather than by a single architect. Group decision-making most often took place such that the architect was prepared to make the final decision but was also willing to take others' opinions into account. Group decision-making involved both benefits and challenges. The review also found that the rational decision-making process of less experienced architects in particular could be comprehensively supported through documentation methods intended for recording architecturally significant design decisions and their rationales. One may speculate that more experienced architects would also benefit from using these methods, although they can be suspected of avoiding them due to, among other things, their heaviness. On the other hand, a more rational decision-making process could also be supported by encouraging architects in various ways to use different reasoning techniques, which would be a lighter alternative to documentation methods, although in that case the other benefits of the documentation methods would be given up as a trade-off. ACM Computing Classification System (CCS): Software and its engineering → Software architectures; Software and its engineering → Software design engineering.