
Browsing by Title


  • Tähkä, Sari (2013)
    This Master's thesis deals with the use of block copolymers in capillary electromigration techniques (literature part) and in both materials chemistry and capillary electrophoresis (experimental part). Amphiphilic block copolymers are an interesting research topic due to their specific molecular structure, which consists of at least two parts with different chemical natures. The great potential of block copolymers arises from the tunability of their size, shape and composition. In recent years, numerous copolymer architectures have been developed, and the demand for new materials for biomolecule separations remains high. The literature part introduces block copolymers as rarely used coating materials in capillary electromigration techniques. The two main electromigration techniques where block copolymers have been tested are capillary electrophoresis and capillary gel electrophoresis. Block copolymers have been attached to the capillary inner surface both permanently and dynamically. In capillary gel electrophoresis the micellization ability of block copolymers has been well known for decades, and specific studies of copolymer phases have been published. In the experimental part of this M.Sc. thesis, a double-hydrophilic poly(N-methyl-2-vinylpyridinium iodide-block-ethylene oxide) diblock copolymer was used in two very different applications to emphasize the potential of block copolymers in various fields. In both studies, the hydrophilicity of the ethylene oxide block and the polycationic nature of the vinylpyridinium block were utilized. First, poly(N-methyl-2-vinylpyridinium iodide-block-ethylene oxide) was used to mediate the self-assembly of ferritin protein cages. The aim of this research was to explore the complexation of double-hydrophilic diblock copolymers with protein cages and to study the molecular morphology of the formed nanoparticle/copolymer assemblies. The complexation process was studied in an aqueous solvent medium, and the formation of complexes was investigated with dynamic light scattering. Transmission electron microscopy and small-angle X-ray scattering were used to characterize the size and shape of the particles. In the second approach, the double-hydrophilic block copolymer was used as a capillary coating material in two different capillary electromigration techniques. The possibility to alter the electro-osmotic flow and to gain a new tool for biomolecule studies was explored. Our results indicated that poly(N-methyl-2-vinylpyridinium iodide-block-ethylene oxide) binds efficiently to oppositely charged objects and surfaces via electrostatic interactions, and the poly(ethylene oxide) block gives good stability in aqueous medium. Nanoparticle co-assembly studies showed that the poly(N-methyl-2-vinylpyridinium iodide-block-ethylene oxide) complexes were approximately 200-400 nm in diameter. For the capillary coating studies, the polymer suppressed electro-osmotic flow efficiently and showed good run-to-run stability, with RSD values from 1.4 to 7.9 %. The coating was observed to be very stable at pH values from 4.5 to 8.5, with ultra-low mobilities. These results demonstrate the potential of double-hydrophilic block copolymers in various fields.
  • Terkki, Eeva (2016)
    Free mobile applications (apps) available on app marketplaces are largely monetized through mobile advertising. The number of clicks received on the advertisements (ads), and thus the revenue gained from them, can be increased by showing targeted ads to users. Mobile advertising networks collect a variety of privacy-sensitive information about users and use it to build advertising profiles. To target ads at individual users based on their interests, these advertising profiles are typically linked to the users' unique device identifiers, such as the advertising ID used in Android. Advertising profiles may contain a large amount of privacy-sensitive information about users, which can attract adversaries to attempt to gain access to this information. Mobile devices are known to leak privacy-sensitive information such as device identifiers in clear text. This poses a potential privacy risk, since an adversary might exploit the leaked identifiers to learn privacy-sensitive details about a victim by sampling personalized ads targeted at the victim. This thesis explores the behavior of mobile ad networks regarding data collection and ad targeting, as well as the possibility of an attack where leaked device identifiers are exploited to request ads targeted at a victim. We investigated these problems in the context of four popular Android ad libraries that support ad targeting, using a custom app and simulated user profiles designed for this purpose. Our findings indicate that it is possible to use sniffed identifiers to impersonate another user when requesting ads, and that to some degree this can result in receiving ads specific to the victim's profile. For some ad networks, the lack of ad targeting makes it infeasible to conduct an attack that requests ads targeted at the victim.
  • Gibson, Natalie (2023)
    The search for a profound connection between gravity and quantum mechanics has been a longstanding goal in theoretical physics. One such connection is known as the holographic principle, which suggests that the dynamics within a given region of spacetime can be fully described on its boundary surface. This concept found a concrete realization in string theory, which provides a lower-dimensional description that encapsulates essential aspects of spacetime. While the "AdS/CFT correspondence" exemplifies the success of this holographic approach, it was discovered soon after its formulation that the Universe has a positive cosmological constant, Λ. This immediately sparked interest in a potential correspondence centered around de Sitter (dS) space, which is also characterized by a positive cosmological constant. This thesis comprehensively explores the de Sitter/Conformal Field Theory (dS/CFT) correspondence from various perspectives, along with the unique challenges posed by the distinct nature of dS space. The original dS/CFT duality proposes that a two-dimensional Conformal Field Theory resides on the boundary of three-dimensional asymptotic dS space. However, the definition and interpretation of physical observables within the dS/CFT framework remain open questions. Therefore, the discussion in this thesis covers not only the original dS/CFT conjecture but also more recent advancements in the field. These advancements include a higher-spin dS/CFT duality, the relationship between string theory and dS space, and the intriguing proposal of an "elliptical" dS space. While the dS/CFT correspondence is still far from being well-defined, extensive efforts have been devoted to shedding light on its intricate framework and exploring its potential applications. As the Universe may be evolving towards an approximately de Sitter phase, understanding the dS/CFT correspondence offers a unique opportunity to gain fresh insights into the link between gravity and quantum field theory.
  • Sorkhei, Amin (2016)
    With the fast-growing number of scientific papers produced every year, browsing through scientific literature can be a difficult task: formulating a precise query is often not possible if one is a novice in a given research field, and different terms are often used to describe the same concept. To tackle some of these issues, we build a system based on topic models for browsing the arXiv repository. By visualizing the relationships between keyphrases, documents and authors, the system allows the user to explore the document search space better than is possible with traditional systems based solely on query search. In this paper, we describe the design principles and the functionality supported by this system, and report on a short user study.
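    A minimal sketch, assuming scikit-learn and a toy list of abstracts (neither is from the thesis), of the kind of topic model such a browsing system can be built on: latent Dirichlet allocation fitted to a small document collection, with the top words per topic as raw material for keyphrase labels. All names and parameter choices below are illustrative assumptions.
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation

      # toy stand-ins for arXiv abstracts
      abstracts = [
          "we study convex optimization for sparse signal recovery",
          "deep neural networks for image classification",
          "topic models for exploring large document collections",
      ]

      vectorizer = CountVectorizer(stop_words="english")
      counts = vectorizer.fit_transform(abstracts)             # document-term matrix

      lda = LatentDirichletAllocation(n_components=2, random_state=0)
      doc_topics = lda.fit_transform(counts)                   # per-document topic mixtures

      # top words per topic, usable as keyphrase labels in a browsing interface
      terms = vectorizer.get_feature_names_out()
      for k, weights in enumerate(lda.components_):
          top = [terms[i] for i in weights.argsort()[-5:][::-1]]
          print(f"topic {k}: {top}")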
  • Longi, Krista (2016)
    Researchers have long tried to identify factors that could explain why programming is easier for some than for others or that could be used to predict programming performance. The motivation behind most studies has been identifying students who are at risk of failing and improving passing rates on introductory courses, as these have a direct impact on retention rates. Various potential factors have been identified, including factors related to students' background, programming behavior, or psychological and cognitive characteristics. However, the results have been inconsistent. This thesis replicates some of these previous studies in a new context, and pairwise analyses of various factors and performance are performed. We have data collected from three cohorts of an introductory Java programming course that contains a large number of exercises and where personal assistance is available. In addition, this thesis contributes to the topic by modeling the dependencies between several of these factors. This is done by learning a Bayesian network from the data. We then evaluate these networks by trying to predict whether students will pass or fail the course. The focus is on factors related to students' background and psychological and cognitive characteristics. No clear predictors were identified in this study. We were able to find weak correlations between some of the factors and programming performance. However, in general, the correlations we found were smaller than in previous studies or nonexistent. In addition, finding just one optimal network that describes the domain is not straightforward, and the classification rates obtained were poor. Thus, the results suggest that the factors related to students' background and psychological and cognitive characteristics included in this study are not good predictors of programming performance in our context.
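    A rough sketch of the modelling step described above, under the assumption that a score-based structure search over discrete variables is used (here via the pgmpy library, which the thesis does not name); the synthetic data has a deliberately strong built-in dependency purely for illustration, unlike the weak effects the thesis actually reports.
      import numpy as np
      import pandas as pd
      from pgmpy.estimators import HillClimbSearch, BicScore
      from pgmpy.models import BayesianNetwork
      from pgmpy.inference import VariableElimination

      # illustrative synthetic data: a binary background factor and a course outcome
      rng = np.random.default_rng(0)
      prior = rng.integers(0, 2, size=300)                     # prior programming experience
      passed = (prior | (rng.random(300) < 0.1)).astype(int)   # pass/fail outcome
      data = pd.DataFrame({"prior_programming": prior, "passed_course": passed})

      # learn a network structure from the data, then fit its parameters
      structure = HillClimbSearch(data).estimate(scoring_method=BicScore(data))
      model = BayesianNetwork(structure.edges())
      model.fit(data)

      # classify a new student as pass/fail given the observed factor
      prediction = VariableElimination(model).map_query(
          variables=["passed_course"], evidence={"prior_programming": 1})
      print(list(structure.edges()), prediction)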
  • Luhtakanta, Anna (2019)
    Finding and exploring relevant information from a huge amount of available information is crucial in today's world. The information need can be a specific and precise search, a broad exploratory search, or something in between. Therefore, an entity-based search engine could provide a solution that combines these two search goals. The focus of this study is to 1) review previous research articles on different approaches to entity-based information retrieval and 2) implement a system that tries to support both precise information needs and exploratory search, regardless of whether the search is made using a basic free-form query or a query with multiple entities. It is essential to improve search engines to support different types of information needs in the constantly expanding information space.
  • Lindgren, Eveliina (2015)
    An experiment-driven approach to software product and service development is getting increasing attention as a way to channel limited resources to the efficient creation of customer value. In this approach, software functionalities are developed incrementally and validated in continuous experiments with stakeholders such as customers and users. The experiments provide factual feedback for guiding subsequent development. Although case studies on experimentation conventions in the industry exist, an understanding of the state of the practice is incomplete. Furthermore, the obstacles and success factors of continuous experimentation have been little discussed. To these ends, an interview-based qualitative survey was conducted, exploring the experimentation experiences of ten software development companies. The study found that although the principles of continuous experimentation resonated with industry practitioners, the state of the practice was not mature. In particular, experimentation was rarely systematic and continuous. Key challenges related to changing organizational culture, accelerating development cycle speed, measuring customer value and product success, and securing resources. Success factors included an empowered organizational culture and deep customer and domain knowledge. There was also a good availability of technical tools and competence to support experimentation.
  • Ahlfors, Dennis (2022)
    As the role of IT and computer science in society grows, so does interest in computer science education. Research covering study success and study paths is important both for understanding student needs and for developing the educational programmes further. Using a data set covering student records from 2010 to 2020, this thesis aims to provide key insights and baseline research on computer science study success and study paths at the University of Helsinki. Using novel visualizations and descriptive statistics, this thesis builds a picture of the evolution of study paths and student success over a 10-year timeframe, providing much-needed contextual information to be used as inspiration for future focused research into the phenomena discovered. The visualizations combined with statistical results show that certain student groups seem to have better study success and that there are differences in the study paths chosen by the student groups. It is also shown that graduation rates from the Bachelor's Programme in Computer Science are generally low, with some student groups showing higher-than-average graduation rates. Time from admission to graduation is longer than suggested, and the sample study paths provided by the university are not generally followed, leading to the conclusion that the programme structure needs some assessment to better accommodate students with diverse academic backgrounds and differing personal study plans.
  • Barral, Oswald (2013)
    This thesis extends the study of implicit indicators used to infer interest in documents in information retrieval tasks. We study the behavior of two different categories of implicit indicators: fixation-derived features (number of fixations, average fixation time, regression ratio, length of forward saccades) and physiology (pupil dilation, electrodermal activity). With the limited number of participants at our disposal, we study how these measures behave when documents are read at three different reading rates. Most of the fixation-derived features are found to differ significantly when reading at different speeds. Furthermore, the ability of pupil size and electrodermal activity to indicate perceived relevance is found to be intrinsically dependent on reading speed. That is, when users read at a comfortable reading speed, these measures are able to correctly discriminate relevance judgments, but they fail when the imposed reading speed is increased. Therefore, the outcomes of this thesis strongly suggest taking reading speed into account when designing highly adaptive information retrieval systems.
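    For illustration only, a small sketch of how the fixation-derived features named above can be computed; the input format (one (x_position, duration_ms) pair per fixation, in reading order) and the exact feature definitions are assumptions, not the thesis' processing pipeline.
      def fixation_features(fixations):
          xs = [x for x, _ in fixations]
          durations = [d for _, d in fixations]
          saccades = [b - a for a, b in zip(xs, xs[1:])]     # horizontal jumps between fixations
          forward = [s for s in saccades if s > 0]
          regressions = [s for s in saccades if s < 0]       # right-to-left (regressive) movements
          return {
              "n_fixations": len(fixations),
              "avg_fixation_ms": sum(durations) / len(durations),
              "regression_ratio": len(regressions) / len(saccades) if saccades else 0.0,
              "avg_forward_saccade": sum(forward) / len(forward) if forward else 0.0,
          }

      print(fixation_features([(10, 210), (55, 180), (40, 250), (90, 200)]))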
  • Krause, Tina (2023)
    The use of fossil fuels is a significant contributor to greenhouse gas emissions, making the transition to zero-carbon energy systems and improvements in energy efficiency important in climate change mitigation. The energy transition also puts the citizen in a more influential position and changes the traditional dynamics between energy producers and consumers when citizens produce their own energy. Furthermore, due to the opportunity and capacity of the demand side, the energy transition places a greater emphasis on energy planning at the local level, which requires new solutions and changes in the management of the energy system. One rising phenomenon is the potential of bottom-up developments like energy communities. Within the building sector, housing cooperatives have started to emerge as energy communities, offering a way to address the energy transition through bottom-up-driven energy actions that provide renewable energy and other benefits to the local community. This master's thesis examines housing cooperatives' energy communities and their role in the energy transition. The research addresses the shared renovation project in Hepokulta, Turku, seen from the energy community perspective. Furthermore, the research highlights the importance of niche development in sustainable transition, acknowledging energy communities as socio-technical niches where development is partly embedded in renewable energy technology and partly in new practices. The concept of energy community is therefore analysed through the lens of Strategic Niche Management, which focuses on expectations, networks, and learning. This research aims to analyse how residents in Hepokulta perceive the energy community project through the niche processes and how the development of energy communities might affect urban development. Analysing the residents' perceptions provides insight into the energy community characteristics and the relationship between residents and the project development processes. Additionally, the analysis identifies matters that could be changed to improve the development. The thesis is a mixed-methods study combining quantitative and qualitative data, collected through a survey sent to the eight housing cooperatives in Hepokulta. The research showed that residents perceive the shared project in Hepokulta as essential for the area's development. Moreover, many residents overlooked the social aspects of the development, highlighting the absence of the energy community perspective in the renovation. The findings suggest some weaknesses within the three niche processes, including the early involvement of residents and communication. Furthermore, although the residents perceived themselves as important actors and the literature emphasised the importance of the demand side in future energy systems, the research revealed that the connection between project development and the residents is still lacking. However, the analysis indicates that introducing additional actors could help the energy community develop. External assistance could, for instance, benefit the housing cooperatives by facilitating improvements in the decision-making processes, the network between actors, and the sharing of information and skills.
  • Wachirapong, Fahsinee (2023)
    The importance of topic modeling in the analysis of extensive textual data is magnified by the inefficiency of manual work due to its time-consuming nature. Data preprocessing is a critical step before feeding text data into analysis. This process ensures that irrelevant information is removed and the remaining text is suitably formatted for topic modeling. However, the absence of standard rules often leads practitioners to adopt undisclosed or poorly understood preprocessing strategies. This potentially impacts the reproducibility and comparability of research findings. This thesis examines text preprocessing steps, including lowercase conversion, non-alphabetic removal, stopword elimination, stemming, and lemmatization, and explores their influence on data quality, vocabulary size, and the topic interpretations generated by the topic model. Additionally, we examine variations in the order of preprocessing steps and their impact on the topic model's outcomes. Our examination spans 120 diverse preprocessing approaches on the Manifesto Project Dataset. The results underscore the substantial impact of preprocessing strategies on perplexity scores and highlight the challenges in determining the optimal number of topics and interpreting final results. Importantly, our study raises awareness of the role of data preprocessing in shaping the perceived themes and content of identified topics and proposes recommendations for researchers to consider before performing data preprocessing.
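    A minimal sketch of one such preprocessing sequence (lowercasing, non-alphabetic removal, stopword elimination, lemmatization, in that order), assuming NLTK with its stopword and WordNet data already downloaded; the thesis compares 120 variations of pipelines along these lines, and this ordering is only one of them.
      import re
      from nltk.corpus import stopwords
      from nltk.stem import WordNetLemmatizer

      lemmatizer = WordNetLemmatizer()
      stop = set(stopwords.words("english"))

      def preprocess(text):
          text = text.lower()                                  # lowercase conversion
          text = re.sub(r"[^a-z\s]", " ", text)                # non-alphabetic removal
          tokens = [t for t in text.split() if t not in stop]  # stopword elimination
          return [lemmatizer.lemmatize(t) for t in tokens]     # lemmatization

      print(preprocess("The parties' manifestos emphasised 3 economic reforms."))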
  • Heinonen, Ava (2020)
    The design of instructional material affects learning from it. Abstraction, or limiting details and presenting difficult concepts by linking them to familiar objects, can limit the burden on working memory and make learning easier. The presence of visualizations and the level to which students can interact with them and modify them, also referred to as engagement, can promote information processing. This thesis presents the results of a study using a 2x3 experimental design with abstraction level (high abstraction, low abstraction) and engagement level (no viewing, viewing, presenting) as the factors. The study consisted of two experiments with different topics: hash tables and multidimensional arrays. We analyzed the effect of these factors on instructional efficiency and learning gain, accounting for prior knowledge and prior cognitive load. We observed that the high-abstraction conditions limited cognitive load during studying for all participants, but were particularly beneficial for participants with some prior knowledge of the topic they studied. We also observed that higher engagement levels benefit participants with no prior knowledge of the topic they studied, but not necessarily participants with some prior knowledge. Low cognitive load in the pre-test phase makes studying easier regardless of the instructional material, as does knowledge of the topic being studied. Our results indicate that abstractions and engagement with learning materials need to be designed with the students and their knowledge levels in mind. However, further research is needed to assess which components of the different abstraction levels affect learning outcomes, and why and how cognitive load in the pre-test phase affects cognitive load throughout studying and testing.
  • Korhonen, Keijo (2022)
    The variational quantum eigensolver (VQE) is one of the most promising proposals for a hybrid quantum-classical algorithm designed to take advantage of near-term quantum computers. With the VQE it is possible to find ground-state properties of various molecules, a task for which many classical algorithms have been developed but which become either too inaccurate or too resource-intensive, especially for so-called strongly correlated problems. The advantage of the VQE lies in the ability of a quantum computer to represent a complex system with fewer so-called qubits than the number of bits a classical computer would require, thus making the simulation of large molecules possible. One of the major bottlenecks for the VQE to become viable for simulating large molecules, however, is the scaling of the number of measurements necessary to estimate expectation values of operators. Numerous solutions have been proposed, including the use of adaptive informationally complete positive operator-valued measures (IC-POVMs) by García-Pérez et al. (2021). Adaptive IC-POVMs have been shown to improve the precision of expectation-value estimates on quantum computers, with better scaling in the number of measurements compared to existing methods. The use of these adaptive IC-POVMs in a VQE allows for more precise energy estimations and additional expectation-value estimations of separate operators without any further overhead on the quantum computer. We show that this approach improves upon existing measurement schemes and adds a layer of flexibility, as IC-POVMs represent a form of generalized measurements. In addition to a naive implementation using IC-POVMs as part of the energy estimations in the VQE, we propose techniques to reduce the number of measurements, either by adapting the number of measurements necessary for a given energy estimation or through estimation of the operator variance for a Hamiltonian. We present results for simulations using the former technique, showing that we are able to reduce the number of measurements while retaining the improvement in measurement precision obtained from IC-POVMs.
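    To make the hybrid structure concrete, a schematic sketch of the outer VQE loop: a classical optimizer repeatedly adjusts ansatz parameters to minimize an energy estimate. The quantum part is stubbed out here with a toy surrogate function; in the thesis that estimate would come from circuit measurements performed with adaptive IC-POVMs.
      import numpy as np
      from scipy.optimize import minimize

      def estimate_energy(params):
          # placeholder surrogate for <H> estimated from measurements of a
          # parameterized circuit; not a real molecular Hamiltonian
          return np.cos(params[0]) + 0.5 * np.sin(params[1]) ** 2

      # classical outer loop: a gradient-free optimizer drives the parameters
      result = minimize(estimate_energy, x0=np.array([0.1, 0.1]), method="COBYLA")
      print("estimated ground-state energy:", result.fun)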
  • Silvennoinen, Meeri (2022)
    Malaria is a major cause of human mortality, morbidity, and economic loss. P. falciparum is one of six Plasmodium species that cause malaria and is widespread in sub-Saharan Africa. Many of the currently used antimalarial drugs have become less effective, have adverse effects, and are highly expensive, so new ones are needed. mPPases are membrane-integral pyrophosphatases that are found in the vacuolar membranes of protozoa but not in humans. These enzymes pump sodium ions and/or protons across the membrane and are crucial for parasite survival and proliferation. This makes them promising targets for new drug development. In this study we aimed to identify and characterize transient pockets in mPPases that could offer suitable ligand-binding sites. P. falciparum was chosen because of its therapeutic interest, and T. maritima and V. radiata were chosen because they are test systems in compound discovery. The research was performed using molecular modelling techniques, mainly homology modelling, molecular dynamics, and docking. mPPases from the three species were used to build five different systems: P. falciparum (apo closed conformation), T. maritima (apo open, open with ligand, and apo closed) and V. radiata (open with ligand). No 3D structure is available for the P. falciparum mPPase, so a homology model was built using the closest available structure, from V. radiata mPPase, as a template. Runs of 100 ns molecular dynamics simulations were conducted for these five systems: monomeric mPPase for P. falciparum and dimeric mPPases for the others. Two representative 3D structures, the most dissimilar from one another, were selected from each of the five trajectories using clustering for further analysis. The selected 3D structures were first analyzed to identify possible binding pockets using two independent methods, SiteMap and blind docking (where no pre-determined cavity is set for docking). A second set of experiments using different scores (druggability, enclosure, exposure, …) and targeted docking was then run to characterize all the located pockets. As a result, only half of the catalytic pockets were identified. No transient pockets were identified in the P. falciparum mPPase, and all of those found were located within the membrane. Docking was performed using compounds that have shown inhibitory behavior in previous studies, but it did not give good results in the tested structures. In the end, none of the transient pockets were interesting for further study.
  • Koskinen, Anssi (2020)
    The applied mathematical field of inverse problems studies how to recover an unknown function from a set of possibly incomplete and noisy observations. One example of a real-life inverse problem is image destriping, which is the process of removing stripes from images. Stripe noise is a very common phenomenon in various fields, such as satellite remote sensing and dental X-ray imaging. In this thesis we study methods for removing stripe noise from dental X-ray images. The stripes in the images are a consequence of the geometry of our measurement setup and the sensor. In X-ray imaging, X-rays of a certain intensity are sent through the measured object, and the remaining intensity is measured with an X-ray detector. The detectors used in this thesis convert the remaining X-rays directly into electrical signals, which are then measured and finally processed into an image. We notice that the measured values behave according to an exponential model and use this knowledge to turn the correction into a nonlinear fitting problem. We study two linearization methods and three iterative methods. We examine the performance of the correction algorithms with both simulated and real striped images. The results of the experiments show that although some of the fitting methods give better results in the least-squares sense, the exponential prior leaves some visible line artefacts. This suggests that the methods can be further improved by applying a suitable regularization method. We believe that this study provides a good baseline for a better correction method.
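    As an illustration of the linearization idea (the exact model form and parameters used in the thesis are not reproduced here): if a detector element responds as y = a·exp(b·x), taking logarithms gives log y = log a + b·x, which can be fitted by ordinary least squares; the same model can also be fitted directly as a nonlinear problem.
      import numpy as np
      from scipy.optimize import curve_fit

      # synthetic detector response following an assumed exponential model with mild noise
      rng = np.random.default_rng(1)
      x = np.linspace(0.0, 1.0, 50)                            # nominal exposure level
      y = 2.0 * np.exp(-1.3 * x) * (1 + 0.02 * rng.standard_normal(50))

      # linearization: fit log y against x with a first-degree polynomial
      b_lin, log_a = np.polyfit(x, np.log(y), 1)
      print("linearized fit a, b:", np.exp(log_a), b_lin)

      # the same exponential model fitted directly as a nonlinear least-squares problem
      (a_nl, b_nl), _ = curve_fit(lambda t, a, b: a * np.exp(b * t), x, y, p0=(1.0, -1.0))
      print("nonlinear fit  a, b:", a_nl, b_nl)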
  • Merikoski, Jori (2016)
    We study growth estimates for the Riemann zeta function on the critical strip and their implications for the distribution of prime numbers. In particular, we use the growth estimates to prove the Hoheisel-Ingham Theorem, which gives an upper bound for the difference between consecutive prime numbers. We also investigate the distribution of prime pairs, in connection with which we offer original ideas. The Riemann zeta function is defined as ζ(s) := \sum_{n=1}^{∞} n^{-s} in the half-plane Re s > 1. We extend it to a meromorphic function on the whole plane with a simple pole at s = 1, and show that it satisfies the functional equation. We discuss two methods, van der Corput's and Vinogradov's, for giving upper bounds on the growth of the zeta function on the critical strip 0 ≤ Re s ≤ 1. Both of these are based on the observation that ζ(s) is well approximated on the critical strip by a finite exponential sum \sum_{n=1}^{T} n^{-s} = \sum_{n=1}^{T} \exp\{-s \log n\}. Van der Corput's method uses the Poisson summation formula to transform this sum into a sum of integrals, which can be estimated easily. This yields the estimate ζ(1/2 + it) = \mathcal{O}(t^{\frac{1}{6}} \log t) as t → ∞. Vinogradov's method transforms the problem of estimating an exponential sum into a combinatorial problem. It is needed to obtain a strong bound for the growth of the zeta function near the vertical line Re s = 1. We use complex analysis to prove the Hoheisel-Ingham Theorem, which states that if ζ(1/2 + it) = \mathcal{O}(t^{c}) for some constant c > 0, then for any θ > \frac{1+4c}{2+4c} and any function h with x^{θ} << h(x) << x, we have ψ(x+h) - ψ(x) ∼ h as x → ∞. The proof of this relies heavily on the growth estimate obtained by Vinogradov's method. Here ψ(x) := \sum_{n ≤ x} Λ(n) = \sum_{p^k ≤ x} \log p is the summatory function of the von Mangoldt function. From this, using van der Corput's estimate, we obtain that the difference between consecutive primes satisfies p_{n+1} - p_{n} < p_{n}^{\frac{5}{8} + \epsilon} for all large enough n and any \epsilon > 0. Finally, we study prime pairs and the Hardy-Littlewood Conjecture on their distribution. More precisely, let π_{2k}(x) stand for the number of primes p ≤ x such that p + 2k is also prime. The following ideas are all original contributions of this thesis: We show that the average of π_{2k}(x) over 2k ≤ x^{θ} is exactly what is expected by the Hardy-Littlewood Conjecture. Here we can choose θ > \frac{1+4c}{2+4c} as above. We also give a lower bound for the average of π_{2k}(x) over much smaller intervals 2k ≤ E \log x, and give interpretations of our results using the concept of equidistribution. In addition, we study prime pairs using the discrete Fourier transform. We express the function π_{2k}(n) as an exponential sum and extract from this sum the term predicted by the Hardy-Littlewood Conjecture. This can be interpreted as a discrete analog of the method of major and minor arcs, which is often used to tackle problems in additive number theory.
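    As a small numerical illustration of the definition above (not part of the thesis), ψ(x) can be computed directly by summing log p over all prime powers p^k ≤ x; the prime number theorem gives ψ(x) ∼ x, which the Hoheisel-Ingham Theorem refines to short intervals.
      import math
      from sympy import isprime

      def psi(x: int) -> float:
          """Chebyshev's function psi(x) = sum of log p over prime powers p^k <= x."""
          total = 0.0
          for p in range(2, x + 1):
              if isprime(p):
                  k = 1
                  while p ** k <= x:        # each prime power p^k <= x contributes log p
                      total += math.log(p)
                      k += 1
          return total

      print(psi(10_000), 10_000)            # psi(x) is close to x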
  • Kaipio, Mikko Ari Ilmari (2014)
    This master's thesis consists of two parts related to atomic layer deposition (ALD) processes: a literature survey of so-called ex situ in vacuo analysis methods used in investigations of the ALD chemistry and a summary of the work performed by the author using in situ methods. The first part of the thesis is divided into four sections. In the first two sections ALD as a thin film deposition method is introduced, and in situ and ex situ in vacuo publications related to ALD are summarized. The third section is a general overview of ex situ in vacuo analysis methods, and the final section a literature review covering publications where ex situ in vacuo techniques have been employed in studying ALD processes, with a strong emphasis on analysis methods which are based on the use of x-rays. The second part of the thesis consists of in situ quartz crystal microbalance and quadrupole mass spectrometry studies of the V(NEtMe)4/D2O, V(NEtMe)4/O3, Mg(thd)2/TiF4 and Cu2(CH3COO)4/D2O ALD processes. The experimental apparatus and related theory are given a brief overview, followed by a presentation and discussion of the results.
  • Rissanen, Olli (2014)
    Delivering more value to the customer is the goal of every software company. In modern software business, delivering value in real time requires a company to utilize real-time deployment of software, data-driven decisions and empirical evaluation of new products and features. These practices shorten the feedback loop and allow for faster reaction times, ensuring that development is focused on features providing real value. This thesis investigates the practices known as continuous delivery and continuous experimentation as means of providing value to customers in real time. Continuous delivery is a development practice where software functionality is deployed continuously to the customer environment. This process includes automated builds, automated testing and automated deployment. Continuous experimentation is a development practice where the entire R&D process is guided by conducting experiments and collecting feedback. As part of this thesis, a case study is conducted in a medium-sized software company. The research objective is to analyze the challenges, benefits and organizational aspects of continuous delivery and continuous experimentation in the B2B domain. The data is collected from interviews with members of two teams developing two different software products. The results suggest that technical challenges are only one part of the challenges a company encounters in this transition. For continuous delivery, the company must also address challenges related to customers and procedures. The core challenges are caused by having multiple customers with diverse environments and unique properties, whose business depends on the software product. Some customers also require manual acceptance testing, which slows down production deployments. For continuous experimentation, the company also has to address challenges related to customers and organizational culture. An experiment that reveals value for a single customer might not reveal as much value for other customers due to the unique properties of each customer's business. Additionally, the speed at which experiments can be conducted is tied to the speed at which production deployments can be made. The benefits found from these practices support the case company in solving many of its business problems. The company can expose software functionality to customers from an earlier stage, and guide product development by utilizing feedback and data instead of opinions.
  • Koutsompinas, Ioannis Jr (2021)
    In this thesis we study extension results related to compact bilinear operators in the setting of interpolation theory and more specifically the complex interpolation method, as introduced by Calderón. We say that: 1. the bilinear operator T is compact if it maps bounded sets to sets of compact closure. 2. \bar{A} = (A_0, A_1) is a Banach couple if A_0, A_1 are Banach spaces that are continuously embedded in the same Hausdorff topological vector space. Moreover, if (Ω, \mathcal{A}, μ) is a σ-finite measure space, we say that: 3. E is a Banach function space if E is a Banach space of scalar-valued functions defined on Ω that are finite μ-a.e. and so that the norm of E is related to the measure μ in an appropriate way. 4. the Banach function space E has absolutely continuous norm if for any function f ∈ E and for any sequence (Γ_n)_{n=1}^{+∞} ⊂ \mathcal{A} satisfying χ_{Γ_n} → 0 μ-a.e. we have that ∥f · χ_{Γ_n}∥_E → 0. Assume that \bar{A} and \bar{B} are Banach couples, \bar{E} is a couple of Banach function spaces on Ω, θ ∈ (0, 1) and E_0 has absolutely continuous norm. If the bilinear operator T : (A_0 ∩ A_1) × (B_0 ∩ B_1) → E_0 ∩ E_1 satisfies a certain boundedness assumption and T : \tilde{A_0} × \tilde{B_0} → E_0 compactly, we show that T may be uniquely extended to a compact bilinear operator T : [A_0, A_1]_θ × [B_0, B_1]_θ → [E_0, E_1]_θ, where \tilde{A_j} denotes the closure of A_0 ∩ A_1 in A_j and [A_0, A_1]_θ denotes the complex interpolation space generated by \bar{A}. The proof of this result comes after we study the case where the couple of Banach function spaces is replaced by a single Banach space.
  • Vazquez Muiños, Henrique (2016)
    In this thesis we consider an extension of the Standard Model (SM) with an SU(2)-symmetric Dark Sector and study its viability as a dark matter (DM) model. In the dark sector, a hidden Higgs mechanism generates three massive gauge bosons, which are the DM candidates of the model. We allow a small coupling between the SM Higgs and the scalar of the dark sector, so that there is scalar mixing. We study the new interactions in the model and analyse the consequences of the scalar mixing: new possible decays of the Higgs into DM, Higgs decay rates and production cross sections that differ from the SM predictions, and possible interactions between DM and normal matter. We study the evolution of the DM abundance from the early universe to the present and compare the relic densities that the model yields with the experimental value measured by the Planck satellite. We compute the decay rates of the Higgs in the model and test whether they are consistent with the experimental data from ATLAS, CMS and Tevatron. We calculate the cross section for the interaction between DM and normal matter and compare it with the data from the latest direct detection experiments, LUX and XENON100. We discuss the impact of the experimental constraints on the parameter space of the model and find the regions that give the best fit to the experimental data. In this work we show that the agreement with experiment is optimal when both the DM candidates and the dark scalar are heavier than the Higgs boson.
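    For orientation, the standard tool for tracking a thermal relic abundance of this kind is the Boltzmann equation for the DM number density n; a generic single-species form (not necessarily the exact coupled system solved in the thesis) is \frac{dn}{dt} + 3Hn = -⟨σv⟩(n^2 - n_{eq}^2), where H is the Hubble rate, ⟨σv⟩ the thermally averaged annihilation cross section and n_{eq} the equilibrium number density; integrating this from the early universe to today yields the relic density that is compared with the Planck measurement.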