
Browsing by Title


  • Enwald, Joel (2020)
    Mammography is used as an early detection system for breast cancer, one of the most common types of cancer regardless of sex. Mammography uses specialised X-ray machines to image the breast tissue for possible tumours. Due to the machine’s set-up, as well as to reduce the radiation dose patients are exposed to, the number of X-ray measurements collected is very restricted. Reconstructing the tissue from this limited information is referred to as limited angle tomography. This is a complex mathematical problem that ordinarily leads to poor reconstruction results. The aim of this work is to investigate how well a neural network whose structure utilizes pre-existing models and the known geometry of the problem performs at this task. In this preliminary work, we demonstrate the results on simulated two-dimensional phantoms and discuss the extension of the results to three-dimensional patient data.
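    For orientation (a standard formulation, not quoted from the thesis), limited angle tomography is usually posed as an ill-posed linear inverse problem in which measurements exist only for a restricted set of projection angles and the reconstruction must be regularized:

        \[
          y = A_{\Theta}\,x + \varepsilon, \qquad
          \hat{x} = \arg\min_{x}\ \lVert A_{\Theta}\,x - y \rVert_2^2 + \lambda\,R(x),
        \]

    where $A_{\Theta}$ is the X-ray projection operator restricted to the limited angular range $\Theta$, $x$ is the tissue image, $\varepsilon$ is measurement noise, and $R$ is a regularizer; in learned approaches a neural network can take the role of $R$ or of the whole reconstruction map.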
  • Fatemeh, Ajallooeian (2018)
    Pollen samples from Lake Lavijärvi (sediment core LAV16-05), located in western Karelian Russia, were examined. 21 pollen and spore types were identified in order to reconstruct the vegetation cover of the past ~3000 years and consequently to understand the major climate patterns of the area. The pollen diagram was divided into 4 zones determined by the main vegetation changes: Zone A (2700 to 1400 cal BP, or 750 BC to 550 AD) representing a consistent arboreal forest; Zone B (1400 to 650 cal BP, or 550 to 1300 AD) demonstrating a transition from forest to forest-steppe vegetation; Zone C (650 to 10 cal BP, or 1300 to 1940 AD) illustrating fluctuations in vegetation patterns; and finally Zone D (10 to -66 BP, or 1940 to 2016 AD) showing the recent post-war relaxation of land use. Pinus, Picea, Betula, Alnus, Chenopodiaceae and Poaceae are among the major pollen types. Throughout the core, changes in vegetation patterns and slash-and-burn cultivation are well represented. The Medieval Warm Period and the Little Ice Age are also moderately present in the pollen frequency and variety. The anthropogenic effects of farming are displayed by large abundances of Poaceae and Cerealia pollen, especially in Zone C, by eutrophication of the lake, and by the absence of Picea pollen due to fires. Today, the lake’s surroundings are mainly pasture, with a moderate amount of arable farming. The climate of Lavijärvi appears to have featured long winters with extensive snow cover, especially in the early stages (2600 to 1000 cal BP, or 650 BC to 950 AD), and moderately dry conditions, indicated by Chenopodiaceae growth, while still maintaining enough soil moisture for cultivated plants. Other geochemical indicators, such as TIC, TN and C/N of core LAV16-05, were also measured. The geochemical findings indicate a silt loam sediment profile for the core, carbon that is organic rather than inorganic, and steady yet low levels of TN and TS. Lake Lavijärvi is a good example of a shift from dense arboreal forest to steppe-like vegetation and finally to pasture over a window of 3000 years, and it can reveal useful information on the land-use history of the area.
  • Pudas, Topi (2024)
    This thesis contributes to the ongoing development of a novel, environmentally friendly e-waste recycling technology. We utilize high-intensity focused ultrasound to locally extract gold from the surface of printed circuit boards via cavitation erosion. Acoustic cavitation erosion is the phenomenon in which the acoustically driven violent collapse of gas bubbles in liquid causes damage to nearby solids. Bubble collapse is preceded by dramatic growth, driven by the rarefactive phase of the acoustic wave. In this work, I investigate the effect of ultrasound frequency on the efficiency of gold extraction. Gold extraction experiments were conducted with three custom-built transducers with different resonant frequencies (4.2, 7.3 and 11.8 MHz). The geometries of the transducers were identical, as were the electrical driving parameters. With each transducer, a sequence of gold extraction experiments was conducted with an increasing number of acoustic bursts (ranging from 100k to 1.9M). The results demonstrate that the lowest frequency (4.2 MHz) is 3.8 and 4.5 times more efficient at extracting gold than 7.3 MHz and 11.8 MHz, respectively. This dramatic improvement is likely due to the larger cavitation bubbles associated with lower frequencies. Larger bubbles in the cavitating zone would be expected to undergo more bubble coalescence due to a higher gas volume ratio. Since the energy of bubble collapse increases with bubble size, increased bubble coalescence should augment the energy of bubble collapse. These results provide valuable insights for cavitation research and will guide the ongoing development of our novel e-waste recycling technology.
  • Ulmala, Minna (University of Helsinki, 2012)
    eBusiness collaboration and an eBusiness process are introduced as the context of a long-running eBusiness transaction. The nature of eBusiness collaboration sets requirements for long-running transactions: the ACID properties of the classical database transaction must be relaxed for the eBusiness transaction. Many techniques have been developed to take care of the execution of long-running business transactions, such as the classical Saga and the business transaction model (BTM) of the business transaction framework (BTF). These classic techniques cannot adequately take into account the recovery needs of long-running eBusiness transactions, and they need to be further improved and developed. The expectations for a new service composition and recovery model are defined and described. The DeltaGrid service composition and recovery model (DGM) and the constraint rules-based recovery mechanism (CM) are introduced as examples of the new models. The classic models and the new models are compared to each other, and it is analysed how the models meet the expectations. Neither of the new models uses the unconventional classification of atomicity, although the BTM does. The new models’ recovery mechanisms improve the ability to take data and control dependencies into account in backward recovery. The new models present two different strategies for recovering a failed service. The strategy of the CM increases flexibility and efficiency compared to the Saga or the BTF. The DGM defines characteristics that the CM does not have: a Delta-Enabled rollback, mechanisms for pre-commit and post-commit recoverability, and extensions of the concepts of shallow compensation and deep compensation. Their use guarantees that an eBusiness process always recovers into a consistent state, something that could not be proven for the Saga, the BTM or the CM. The DGM also provides algorithms for the important mechanisms. ACM Computing Classification System (CCS): C.2.4 [Distributed Systems]: Distributed applications
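    To make the recovery terminology concrete, here is a minimal, hypothetical sketch (not from the thesis) of the Saga-style backward recovery that the compared models refine: each completed step registers a compensating action, and on failure the compensations are executed in reverse order.

        # Hypothetical Saga-style executor: steps are (action, compensation) pairs.
        # If any action fails, the compensations of the already completed steps
        # are run in reverse order (backward recovery).
        def run_saga(steps):
            completed = []
            try:
                for action, compensate in steps:
                    action()
                    completed.append(compensate)
            except Exception:
                for compensate in reversed(completed):
                    compensate()   # undo the effects of earlier steps
                raise

        # Illustrative usage with stub actions:
        log = []

        def fail():
            raise RuntimeError("shipping failed")

        steps = [
            (lambda: log.append("reserve stock"), lambda: log.append("release stock")),
            (lambda: log.append("charge card"),   lambda: log.append("refund card")),
            (fail,                                lambda: None),
        ]
        try:
            run_saga(steps)
        except RuntimeError:
            pass
        print(log)  # ['reserve stock', 'charge card', 'refund card', 'release stock']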
  • Sysikaski, Mikko (2019)
    The thesis discusses algorithms for the minimum link path problem, a well-known geometric pathfinding problem. The goal is to find a path that makes the minimum number of turns amidst obstacles in a continuous space. We focus on the most classical variant, the rectilinear minimum link path problem, where the path and the obstacles are restricted to the directions of the coordinate axes. We study the rectilinear minimum link path problem in the plane and in three-dimensional space, as well as in higher-dimensional domains. We present several new algorithms for solving the problem in domains of varying dimension. For the planar case we develop a simple method that has the optimal O(n log n) time complexity. For three-dimensional domains we present a new algorithm with running time O(n^2 log^2 n), which is an improvement over the best previously known result, O(n^2.5 log n). The algorithm can also be generalized to higher dimensions, leading to an O(n^(D-1) log^(D-1) n) time algorithm in D-dimensional domains. We describe the new algorithms as well as the data structures used. The algorithms work by maintaining a reachable region that is gradually expanded to form a shortest path map from the starting point. The algorithms rely on several efficient data structures: the reachable region is tracked by using a simple recursive space decomposition, and the region is expanded by a sweep plane method that uses a multidimensional segment tree.
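    As a rough, discretized illustration (a grid approximation, not the thesis’s continuous-domain algorithm), a rectilinear path with the fewest turns can be found with a 0-1 breadth-first search whose state is a cell plus a heading and whose edge cost is 1 only when the heading changes.

        # Minimum-turn rectilinear path on a small grid via 0-1 BFS.
        # State = (row, col, heading); moving straight costs 0 turns, turning costs 1.
        from collections import deque

        def min_turns(grid, start, goal):
            rows, cols = len(grid), len(grid[0])
            moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]    # the four axis directions
            INF = float("inf")
            dist, dq = {}, deque()
            for h in range(4):                            # the starting direction is free
                dist[(start[0], start[1], h)] = 0
                dq.append((start[0], start[1], h))
            while dq:
                r, c, h = dq.popleft()
                d = dist[(r, c, h)]
                for nh, (dr, dc) in enumerate(moves):
                    nr, nc = r + dr, c + dc
                    if not (0 <= nr < rows and 0 <= nc < cols) or grid[nr][nc]:
                        continue
                    nd = d + (0 if nh == h else 1)
                    if nd < dist.get((nr, nc, nh), INF):
                        dist[(nr, nc, nh)] = nd
                        (dq.appendleft if nh == h else dq.append)((nr, nc, nh))
            return min(dist.get((goal[0], goal[1], h), INF) for h in range(4))

        grid = [[0, 0, 0],
                [0, 1, 0],
                [0, 0, 0]]                                # 1 = obstacle
        print(min_turns(grid, (0, 0), (2, 2)))            # 1 turn, e.g. right, right, down, down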
  • Mahó, Sándor István (2021)
    This thesis analyses the changes in vertically integrated atmospheric meridional energy transport due to polar amplification on an aqua planet. We analyse the energy transport of sensible heat, latent energy, potential energy and kinetic energy. We also cover the energy flux of the mean meridional circulation, transient eddies and stationary eddies, and address the response of the zonal mean air temperature, zonal mean zonal wind, zonal mean meridional wind, zonal mean stream function and zonal mean specific humidity. Numerical model experiments were carried out with OpenIFS in its aqua planet configuration. A control (CTRL) and a polar amplification (PA) simulation were set up, forced by different SST (sea surface temperature) patterns. We detected tropospheric warming, an increase in atmospheric specific humidity between 15 and 90° N/S, and a reduction of the meridional temperature gradient throughout the troposphere. We also found a reduced strength of the subtropical jet stream and a slowdown of the mean meridional circulation. Important changes were identified in the Hadley cell: the rising branch shifted poleward and caused reduced lifting in equatorial areas. Regarding the total vertically integrated meridional energy transport of the atmosphere, we found a reduction for the mean meridional circulation and the transient eddies at all latitudes. The largest reduction was shown by the Hadley cell transport (-15%) and by the midlatitude transient eddy flux (-23%). Unlike most studies, we did not observe an increase in meridional latent energy transport under polar amplification. Therefore, it is stated that the increased moisture content of the atmosphere does not imply increased meridional latent energy transport, and hence there is no compensation for the decrease of meridional dry static energy transport. Lastly, we did not detect stationary eddies in our simulations, which is caused by the simplified surface boundary (i.e. the water-covered Earth surface). The main finding of this thesis is that polar amplification causes decreasing poleward energy transport on an aqua planet.
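    For reference, the quantity analysed here is commonly written as the vertically integrated, zonally averaged northward energy flux across a latitude circle (a standard textbook expression, not quoted from the thesis):

        \[
          F(\phi) = \frac{2\pi a \cos\phi}{g} \int_{0}^{p_s}
          \big[\, v\,( c_p T + g z + L q + k ) \,\big]\, \mathrm{d}p ,
        \]

    where $\phi$ is latitude, $a$ the Earth’s radius, $g$ gravity, $p_s$ surface pressure, $[\,\cdot\,]$ a zonal mean, and $c_p T$, $gz$, $Lq$ and $k$ are the sensible heat, potential energy, latent energy and kinetic energy terms referred to in the abstract.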
  • Valto, Kristian (2023)
    Microservices have been a popular architectural style for building server-side applications for quite a while. The style has gained popularity for properties that counter the downsides of mature monoliths, which become harder to maintain and develop further the larger they get. A monolithic application consists of a single unit; it is usually split into application tiers such as the client, the database, and the server-side application. The properties countering monoliths come from splitting the application into smaller services. These smaller services then form the server-side application by communicating with each other. The goal of a single microservice is to focus on "doing one thing well" and only that. Together the services form a loosely coupled group that achieves larger business goals. However, distributed systems are complex. With software architecture we can separate the complexity of the distributed system from the business functions.
  • Autio, Antti (2020)
    The Standard Model of particle physics describes the elementary particles and the interactions between them. Since the discovery of the Higgs boson (2012), all particles predicted by the Standard Model have been observed. The Standard Model is a very precise theory, but not all observed phenomena can be explained within it. Supersymmetry is one attractive way to extend the Standard Model; however, low-energy supersymmetry has not been observed. Supersymmetry requires a so-called two-Higgs-doublet model in order to work. The ordinary Standard Model contains one Higgs doublet field. A Higgs doublet consists of two complex fields, i.e. four degrees of freedom in total, so one might expect it to give rise to four particles. Three of the degrees of freedom are, however, absorbed by the gauge bosons W+, W− and Z, leaving a single Higgs boson. In two-Higgs-doublet models there are two doublet fields. Since this adds one doublet with four degrees of freedom to the theory, there are altogether five Higgs particles: three electrically neutral (h, H and A) and two electrically charged (H+ and H−). This work focuses on a model-independent search for charged Higgs bosons. The study uses data collected by the CMS detector (Compact Muon Solenoid) at the LHC (Large Hadron Collider). The search for electrically charged Higgs bosons concentrates on final states in which the charged Higgs boson decays into a hadronic tau lepton (i.e. a tau lepton that in turn decays into hadrons) and a tau neutrino. The so-called trigger is a way of filtering the data at the recording stage, since the collisions produce so much data that storing all of it is impossible. Different triggers accept collision events based on different criteria. The trigger introduces significant systematic uncertainties. In this work, the trigger-related uncertainties are reduced by using triggers whose uncertainties are smaller. For this purpose, the analysis has to be split into independent parts whose uncertainties are treated separately. Finally, the parts are statistically combined, which is expected to reduce the overall uncertainty. This work investigates whether and by how much this uncertainty decreases. Using these methods, we were able to find small improvements in the precision of the analysis for heavy charged Higgs bosons. In addition, the expected limit above which the production of a charged Higgs boson in this final state would be observable improves surprisingly; this improvement of the limit is studied by emulating the trigger. The work is intended to be included in the results published from the full Run 2 dataset.
  • Anttila, Kamilla (2020)
    Most machine learning projects consist of four distinct phases: data preparation, model training, model validation, and inference serving. Even though all of these phases are vital components of a successful machine learning project, the focus of most machine learning work is solely on the training of models. The other phases often need to be implemented with ad-hoc solutions, which can easily lead to technical debt. Technical debt is a metaphor for describing the quality of a software project. It describes the state of a project by comparing it to a financial loan. During software development, a loan can be taken to add value to the present state of the system. However, the loan comes with interest and has to be paid back. A loan can be taken, for example, by writing low-quality code to meet a deadline. The loan has to be paid back by rewriting the code later, or else it will start to accrue interest. The interest can be seen in the code functioning poorly or requiring substantial amounts of time to be understood. If a loan is not paid back, the interest keeps increasing, making it more and more difficult to pay the loan back later. In this thesis, we study the effect machine learning frameworks have on technical debt. We describe the machine learning project lifecycle and the various sources of technical debt associated with it. We review available machine learning frameworks and their mitigation strategies for the technical debt in machine learning projects. Our insights demonstrate how frameworks can be used to reduce the overall technical debt in machine learning projects.
  • Mäenpää, Hanna (2013)
    The change from the prescriptive 'waterfall' software process to iterative and incremental models has created a need to redefine software requirements engineering. Agile methodologies have emerged to support the paradigm shift by treating the symptoms: emphasizing change management and customer collaboration to embrace the volatility of requirements and priorities during development. However, it has been recognized that fast-paced agile development does not provide sufficient support for initial or long-term planning of the software product. Research and practitioner literature have started to address this need with the concept of a high-level definition of the software project's outcome: the software Product Vision. In this thesis, uncertainty in new product development is studied from the perspective of Innovation Management. As a vehicle for reducing uncertainty in software projects, the concept of a software Product Vision (the reason for the project's existence) is examined from the viewpoints of New Product Development and Software Engineering literature. The work describes sources of uncertainty in software projects and explains the effects of a mutually understood software Product Vision on software project performance and end-product acceptance. Key parameters for an interdisciplinary and unified software Product Vision are identified by studying four existing and one emergent Product Vision model. Finally, a new Product Vision framework (InnCa) is created based on semantic analysis. The framework's applicability to software projects is evaluated in three participatory action research case studies. As a result, it is concluded that common parameters of an interdisciplinary 'Product Vision' can be identified. The framework created can be used to ideate, rapidly capture, iterate and analyze vague software ideas. It is applicable for sharing knowledge about the project's high-level goals amongst the project's stakeholders. However, it is not argued in this thesis that the framework could be used in all kinds of projects and circumstances. While uncertainty in software projects is a chaotic and complex phenomenon, no 'silver bullet' can address all situations. The topic of the software Product Vision may provide grounds for further research, possibly leading to practical tools for assessing and quantifying uncertainty about goals during a software project's trajectory.
  • Brandtberg, Ronnie (2020)
    Re-engineering can be described as a process for updating an existing system in order to meet new requirements. Restructuring and refactoring are activities that can be performed as a part of the re-engineering process. Supporting new requirements like migrating to new frameworks, new environments and architectural styles is essential for the preservation of quality attributes like maintainability and evolvability. Many larger legacy systems slowly deteriorate in quality over time, and adding new functionality becomes increasingly difficult and costly as technical debt accumulates. To modernize a legacy system and improve the cost-effectiveness of implementing new features, a re-engineering process is often needed. The alternative is to develop a completely new system, but this can lead to the loss of years of accumulated functionality and be too expensive. Re-engineering strategies can be specialized and solve specific needs like cloud migration, or be more generic in nature and support several kinds of needs. Different approaches are suitable for different kinds of source and target systems. The choice of a re-engineering strategy is also influenced by organisational and business factors. The re-engineering of a highly tailored legacy system in a small organisation is different from re-engineering a scalable system in a large organisation. Generic and flexible solutions are well suited especially for smaller organisations with complex systems. The re-engineering strategy Renaissance was applied in a case study at Roima Intelligence Oy in order to find out whether such a strategy is realistically usable, useful and valuable for a smaller organisation. The results show that a re-engineering strategy can be used with low overhead to prioritize different parts of the system and to determine a suitable modernization plan. Renaissance was also shown to add value especially in the form of a deeper understanding of the system and a structured way to evaluate different options for modernization. This is achieved by assessing the system from different views, taking into account especially business and technical aspects. A lesson learned about Renaissance is that determining an optimal scope for the system assessment is challenging. The results are applicable to other organisations dealing with complex legacy systems under constrained resources. Limitations of the study are that the number of different re-engineering strategies discussed is small, and strategies more suitable than Renaissance could be discovered with a systematic mapping study. The number of experts participating in the process itself, as well as in the evaluation, was also low, introducing some uncertainty to the validity of the results. Further research is needed in order to determine how specialized and generic re-engineering strategies compare in terms of needed resources and added value.
  • Ihalainen, Hannes (2022)
    The so-called declarative approach has proven to be a viable paradigm for solving various real-world NP-hard optimization problems in practice. In the declarative approach, the problem at hand is encoded using a mathematical constraint language, and an algorithm for the specific language is employed to obtain optimal solutions to an instance of the problem. One of the most viable declarative optimization paradigms of recent years is maximum satisfiability (MaxSAT), with propositional logic as the constraint language. So-called core-guided MaxSAT algorithms are arguably one of the most effective MaxSAT-solving paradigms in practice today. Core-guided algorithms iteratively detect and rule out (relax) sources of inconsistency (so-called unsatisfiable cores) in the instance being solved. Especially effective are recent algorithmic variants of the core-guided approach that employ so-called soft cardinality constraints for ruling out inconsistencies. In this thesis, we present a structure-sharing technique for the cardinality-based core relaxation steps performed by core-guided MaxSAT solvers. The technique aims at reducing the inherent growth in the size of the propositional formula resulting from the core relaxation steps. Additionally, it enables more efficient reasoning over the relationships between different cores. We empirically evaluate the proposed technique on two different core-guided algorithms and provide open-source implementations of our solvers employing the technique. Our results show that the proposed structure sharing can improve the performance of the algorithms both in theory and in practice.
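    For readers unfamiliar with the paradigm, the following minimal sketch shows a core-guided MaxSAT solver on a toy instance; it assumes the PySAT package (python-sat), whose RC2 solver is an OLL-style core-guided algorithm using soft cardinality constraints, and it is not the solver developed in the thesis.

        # Toy weighted MaxSAT instance solved with a core-guided algorithm (RC2).
        # Hard clause: x1 OR x2 must hold; soft clauses prefer both to be false.
        from pysat.formula import WCNF
        from pysat.examples.rc2 import RC2

        wcnf = WCNF()
        wcnf.append([1, 2])            # hard: at least one of x1, x2 is true
        wcnf.append([-1], weight=1)    # soft: prefer x1 false
        wcnf.append([-2], weight=1)    # soft: prefer x2 false

        with RC2(wcnf) as solver:
            model = solver.compute()   # extracts cores and relaxes them internally
            print(model, solver.cost)  # an optimal model, with cost 1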
  • Andersson, Elina (2013)
    Critical cartographic research has shown that maps are connected to national geopolitics and that they reflect local and national interests and agendas. The map designer and their background can affect the contents and design of a map to a great extent. Research in critical cartography has so far mostly concentrated on traditional paper maps, but nowadays maps are increasingly read on the Web. Because of the growing influence of web maps, it is important to investigate whether and how web maps are connected to geopolitical agendas and interests, and how they picture the world. This research investigates three web map services that are free of charge and in principle open to anyone at any time. The map services originate in different parts of the world: ABmaps is Israeli, Google Maps is American and Yandex Maps is Russian. The services are investigated with the help of two structured content analyses, one focusing especially on the maps' design and tools, and another concentrating on the study area termed 'the Heart of the Middle East'. The maps are approached with the help of semiotic, hermeneutic and deconstructive theories. The results show that, like traditional paper maps, web map services are also connected to geopolitical agendas. The national interests are clear in that each service's home country is portrayed favorably, for example rendered colorfully and at a large size. In Google Maps the whole world is pictured fairly consistently, while ABmaps and Yandex Maps depict areas outside their interest in poor detail and color. It is evident that it is easy to distort a dynamic web map and make statements on political disputes. Since web map services have a great and growing number of users, it is crucial that map users are aware of the distortions that the maps may contain, and thereby of how the picture of the world is distorted.
  • Auvinen, Aleksi (2024)
    Deforestation is an ongoing issue worldwide, and the loss of forests, coupled with climate change, is causing significant changes in global biodiversity and ecosystem functioning. Currently, forests cover approximately 13% of the land area in the United Kingdom, making it one of the least forested countries in Europe. Reforestation efforts aim to increase forest area, ensuring the provision of ecosystem services, biodiversity, carbon storage, and species conservation. The goal in the United Kingdom is to increase forest cover from 13% to 17% nationwide by 2050. However, research focusing on the impacts of climate change largely relies on large-scale climates over areas greater than 1 km². These broad-scale climates, also called macroclimates, affect large areas over the long term. Many species, however, experience temperatures and weather conditions that differ significantly from the macroclimate. Microrefugia created by microclimates can provide habitats for species requiring cooler conditions in a changing climate. Microclimates also matter greatly for forest ecology, as they enhance carbon sequestration, microbial activity, and decomposition processes in forests. Many factors influence the formation of microclimates, such as solar radiation, air temperature, precipitation, soil temperature, humidity, and wind. Vegetation affects radiation and wind near the ground, creating the characteristic microclimate of each area. Buffering refers to the ability of forests to absorb or resist changes in temperature, thereby maintaining more stable temperature conditions compared to temperatures outside the forest. This study aims to answer three questions: (1) How well can forests buffer macroclimate temperatures and create microclimates? (2) What kind of forest structures create microclimates that differ from the macroclimate? (3) Which types of forests planted in Scotland best support the creation of microrefugia? For this study, microclimate measurements and remote sensing data (TLS) were collected from 21 forest sites in England and Scotland. Macroclimate temperatures were determined using ERA5-Land data and temperature data from nearby weather stations. Using linear models and statistical analyses, slope values representing buffering were estimated for each forest plot. The results indicate which types of forests enhance temperature buffering and create microclimate conditions: broadleaf and coniferous forests effectively buffer temperatures during the leaf-on period, while their effectiveness diminishes during the leaf-off period. Broadleaf forests showed buffering during the leaf-on period but reduced buffering during the leaf-off period. Coniferous forests maintained better buffering during the leaf-on period and low buffering during the leaf-off period. Monoculture forests provided consistent buffering, while older and multi-age forests performed best in both periods, demonstrating the importance of structural complexity and diversity. Certain species, such as spruce, Scots pine, and oak, showed strong buffering capabilities year-round. A linear mixed-effects model confirmed that forest structural traits such as Foliage Height Diversity and Relative Height, together with factors such as hillslope, elevation, and tree type, significantly influence temperature buffering.
    Maintaining diverse and structurally complex forests with a mix of species like spruce, Scots pine, and oak is essential for optimizing temperature buffering and creating stable microclimates and microrefugia. These forests can better withstand temperature fluctuations and provide habitats for species affected by climate change. The study highlights the importance of long-term forest growth and diverse understories in enhancing forest resilience and ecological stability. Further research is needed to understand the broader implications of forest management practices on biodiversity and ecosystem functioning, and to inform the planning of reforestation in Scotland so that it can be implemented where it is most effective.
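    As a purely illustrative sketch (hypothetical numbers, not the thesis data), the buffering slope mentioned above can be estimated by regressing below-canopy temperature on macroclimate temperature for a plot; a slope below 1 indicates that the forest dampens macroclimate fluctuations.

        # Hypothetical example of estimating a plot's buffering slope.
        import numpy as np

        rng = np.random.default_rng(1)
        macro = rng.normal(10.0, 6.0, size=200)                # macroclimate temperatures (°C)
        micro = 4.0 + 0.6 * macro + rng.normal(0, 0.8, 200)    # simulated buffered forest plot
        slope, intercept = np.polyfit(macro, micro, 1)
        print(f"buffering slope ≈ {slope:.2f} (1.0 would mean no buffering)")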
  • Pirani, Edoardo (2024)
    Agriculture is associated with one-third of global land use and is responsible for 21% of total greenhouse gas emissions. At the same time, food demand is going to increase, driven by population growth. Climate change adaptation and mitigation interventions in agriculture are therefore increasingly central to addressing soil degradation, loss of biodiversity and food insecurity, and regenerative agriculture is one of the alternatives proposed to the current agri-food system. Understanding the interlinkages between regenerative agriculture and positive deviance among smallholder farmers in Taita-Taveta County, Kenya, can help align agricultural practices with regenerative agriculture interventions that aim to adapt farming activities to climate change and mitigate their impacts, and can thus provide decision makers with information on how to support farmers in this transition. Key informant interviews (11 informants) and a household survey (96 respondents) were used to collect data. A spatial analysis allowed a comparison between three distinct agro-ecological zones, highlighting potential differences in the adoption of regenerative agriculture techniques and in the strategies implemented by positive deviants. By studying how geographical factors influence the adoption of agricultural practices, this thesis is situated in the field of human geography. The results suggest that both regenerative agriculture adoption and positive deviance are highly context-dependent. Positive deviants typically shifted from subsistence agriculture to high-value crops. By engaging in contract farming, they accessed reliable markets, financing, and inputs, and received private extension services. In the lowlands, positive deviants excelled at coping with water scarcity and mitigating the effects of climate change, while in the highlands they strategically ventured into horticulture at a commercial level. Overall, while regenerative agriculture practices played a role in climate-resilient agriculture, their adoption was not clearly linked with positive deviance.
  • Lapinlampi, George (2020)
    There is a specific, but at times quite significant, problem in time series modeling caused by changing means. First, the foundation of the model addressing this problem is introduced in the form of the basic theory of Markov chains and of problems related to hidden Markov chains. The approach builds on the ARMA (autoregressive moving average) model but utilizes estimation methods from areas not specifically dedicated to time series analysis. The hybrid approach comprising Markov chains, the EM (expectation-maximization) algorithm, and linear modeling may be well justified when conventional methods do not produce the desired results and the modeler has the competence and means to attempt more sophisticated approaches. The literature review provides insight into the earlier kinds of models that led to the development of the model investigated in this work. Finally, in the empirical part, the model's power is assessed against the conventional ARMA model. The modeling is performed on simulated series in order to assess the functionality of the EM algorithm, to have precise knowledge of the true state variables, and to obtain a fair comparison between the linear and non-linear models. The models are compared using multiple diagnostic procedures such as the AIC (Akaike information criterion), autocorrelation and partial autocorrelation functions, residual variance, and other descriptive statistical measures.
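    To make the "changing means" problem concrete, here is a minimal simulation sketch (illustrative parameters only, not from the thesis): a two-state hidden Markov chain switches the mean of an otherwise ordinary noise series, which is exactly the structure a plain ARMA model struggles with and an EM-estimated hidden Markov model is designed for.

        # Simulate a series whose mean is governed by a two-state Markov chain.
        import numpy as np

        rng = np.random.default_rng(0)
        P = np.array([[0.98, 0.02],     # transition probabilities between regimes
                      [0.03, 0.97]])
        means = np.array([0.0, 3.0])    # regime-dependent means
        n, state = 500, 0
        states = np.empty(n, dtype=int)
        y = np.empty(n)
        for t in range(n):
            states[t] = state
            y[t] = means[state] + rng.normal(scale=1.0)
            state = rng.choice(2, p=P[state])

        # A plain ARMA fit would have to absorb the level shifts into its innovations;
        # an EM-estimated hidden Markov model instead recovers P and the regime means.
        print(np.round(y[:10], 2))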
  • Saikko, Paul (2015)
    Real-world optimization problems, such as those found in logistics and bioinformatics, are often NP-hard. Maximum satisfiability (MaxSAT) provides a framework within which many such problems can be efficiently represented. MaxHS is a recent exact algorithm for MaxSAT. It is a hybrid approach that uses a SAT solver to compute unsatisfiable cores and an integer programming (IP) solver to compute minimum-cost hitting sets for the found cores. This thesis analyzes and extends the MaxHS algorithm. To enable this, the algorithm is re-implemented from scratch in the C++ programming language. The resulting MaxSAT solver, LMHS, recently gained top positions at an international evaluation of MaxSAT solvers. This work looks into various aspects of the MaxHS algorithm and its applications. The impact of different IP solvers on the MaxHS algorithm and the behavior induced by different strategies for postponing IP solver calls are examined. New methods of enhancing the computation of unsatisfiable cores in MaxHS are examined. Fast core extraction through parallelization by partitioning soft clauses is explored. A modification of the final conflict analysis procedure of a SAT solver is used to generate additional cores without additional SAT solver invocations. The use of additional constraint propagation procedures in the SAT solver used by MaxHS is investigated. As a case study, acyclicity constraint propagation is implemented and its effectiveness for bounded-treewidth Bayesian network structure learning using MaxSAT is evaluated. The extension of MaxHS to the labeled MaxSAT framework, which allows for more efficient use of preprocessing techniques and group MaxSAT encodings in MaxHS, is discussed. The re-implementation of the MaxHS algorithm, LMHS, also enables incrementality in efficiently adding constraints to a MaxSAT instance during the solving process. As a case study, this incrementality is used to solve subproblems with MaxSAT within GOBNILP, a tool for finding optimal Bayesian network structures.
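    To illustrate the hitting-set half of the hybrid (a toy, brute-force stand-in for the IP solver, with a hypothetical set of cores; not code from LMHS): given the unsatisfiable cores found so far, MaxHS looks for a minimum-cost set of soft clauses that intersects every core, and only clauses in that set may be falsified in the next SAT call.

        # Minimum-cost hitting set over a hypothetical collection of cores.
        from itertools import combinations

        def min_cost_hitting_set(cores, weights):
            """Cheapest set of soft-clause indices intersecting every core (brute force)."""
            universe = sorted(set().union(*cores))
            best, best_cost = None, float("inf")
            for r in range(len(universe) + 1):
                for subset in combinations(universe, r):
                    chosen = set(subset)
                    if all(chosen & core for core in cores):
                        cost = sum(weights[i] for i in chosen)
                        if cost < best_cost:
                            best, best_cost = chosen, cost
            return best, best_cost

        cores = [{0, 1}, {1, 2}, {3}]               # cores reported by the SAT solver
        weights = {0: 2, 1: 1, 2: 3, 3: 1, 4: 5}    # soft-clause weights
        print(min_cost_hitting_set(cores, weights))  # ({1, 3}, 2)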
  • Maaranen, Timo (2015)
    The thesis presents pathfinding methods commonly used in modern games, as well as the search spaces in which these methods operate. It introduces the principles of pathfinding starting from the basics and is therefore suitable, for example, as an introduction to the topic. The perspective is practical and aims to take into account the limitations of gaming devices, the difficulty of implementing the search methods and, on the other hand, the requirements imposed by the rules of the games. CCS classification: Computing methodologies ~ Motion path planning; Applied computing ~ Computer games
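    As a minimal, generic illustration of the kind of method surveyed (a standard grid-based A* search, not an example taken from the thesis):

        # A* on a small grid with 4-directional movement and a Manhattan-distance heuristic.
        import heapq

        def astar(grid, start, goal):
            rows, cols = len(grid), len(grid[0])
            h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
            frontier = [(h(start), 0, start)]          # entries are (f = g + h, g, position)
            best_g = {start: 0}
            while frontier:
                f, g, pos = heapq.heappop(frontier)
                if pos == goal:
                    return g                           # length of the shortest path
                if g > best_g.get(pos, float("inf")):
                    continue                           # stale queue entry
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nxt = (pos[0] + dr, pos[1] + dc)
                    if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and not grid[nxt[0]][nxt[1]]:
                        ng = g + 1
                        if ng < best_g.get(nxt, float("inf")):
                            best_g[nxt] = ng
                            heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
            return None                                # goal unreachable

        grid = [[0, 0, 0, 0],
                [1, 1, 1, 0],
                [0, 0, 0, 0]]                          # 1 = blocked cell
        print(astar(grid, (0, 0), (2, 0)))             # 8 steps around the wall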
  • Kyyhkynen, Juho (2018)
    The aim of this thesis is to show the equivalence of the recursive functions and the lambda-definable functions. Both classes of functions are models of computation that contributed to the birth of computability theory and that describe processes which can be automated. Kurt Gödel needed recursive functions to mechanize provability in predicate logic. Lambda-definability, in turn, is based on the lambda calculus devised by Alonzo Church, which consists of formulas representing functions and of conversion rules defined on them. (In Finnish the lambda calculus, 'lambdalaskenta', is sometimes also rendered as 'lambdakalkyyli'.) Alongside the recursive functions, the lambda-definable functions have been shown to coincide with several other formalisms of computation. As a consequence, it has come to be conjectured that the recursive operations are exactly those that can be realized by some algorithm or automated procedure at all. In the years after the birth of the lambda calculus, computability theory gradually developed; it studies what can, even in principle, be solved with such mechanical systems, such as a computer. Chapter 2 introduces the family of recursive functions and the recursively decidable relations. Prior knowledge of mathematical logic and recursive functions is helpful, since the semantic correctness of all the definitions is not discussed. Chapter 3 introduces the terms and conversion rules of the lambda calculus and proves the Church-Rosser theorem, which establishes the lambda calculus as a workable system of computation. In addition, the fixed-point theorem for lambda terms is proved; the corresponding result for recursive functions requires a considerably more intricate proof. No prior knowledge of the lambda calculus is required. Chapter 4 introduces the set of lambda-definable functions, which in the final chapter is shown to be the same as the set of recursive functions. Computability theory often allows partial functions, whose value need not be defined at every point; this thesis, however, deals only with total functions.
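    For orientation (standard textbook material, not excerpted from the thesis), two staple ingredients of such an equivalence proof are the Church numerals, which let lambda terms represent natural numbers, and the fixed-point combinator behind the fixed-point theorem mentioned above:

        \[
          \underline{n} = \lambda f.\,\lambda x.\, f^{\,n}x,
          \qquad\text{e.g.}\qquad
          \underline{2} = \lambda f.\,\lambda x.\, f\,(f\,x),
        \]
        \[
          \mathbf{Y} = \lambda f.\,(\lambda x.\, f\,(x\,x))\,(\lambda x.\, f\,(x\,x)),
          \qquad
          \mathbf{Y}\,F =_{\beta} F\,(\mathbf{Y}\,F) \quad \text{for every term } F.
        \]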