
Browsing by Title

  • Autio, Antti (2020)
    The Standard Model of particle physics describes the elementary particles and their interactions. Since the discovery of the Higgs boson (2012), all particles predicted by the Standard Model have been observed. The Standard Model is a very precise theory, but not all observed phenomena can be explained within it. Supersymmetry is one attractive way to extend the Standard Model, but low-energy supersymmetry has not been observed. Supersymmetry requires a so-called two-Higgs-doublet model in order to work. The ordinary Standard Model contains one Higgs doublet field. A Higgs doublet consists of two complex fields, i.e. four degrees of freedom in total, so one might expect it to give rise to four particles. Three of the degrees of freedom, however, are absorbed by the gauge bosons W+, W− and Z, leaving a single Higgs boson. In two-Higgs-doublet models there are two doublet fields. Since this adds one doublet with four degrees of freedom to the theory, there are five Higgs particles in total: three electrically neutral (h, H and A) and two electrically charged (H+ and H−). This work focuses on a model-independent search for charged Higgs bosons. The study uses data collected by the CMS detector (Compact Muon Solenoid) at the LHC (Large Hadron Collider). The search for charged Higgs bosons concentrates on final states in which the charged Higgs boson decays into a hadronic tau lepton (i.e. a tau lepton that in turn decays into hadrons) and a tau neutrino. The so-called trigger is a way of filtering data at the recording stage, since the collisions produce far too much data for all of it to be stored. Different triggers accept collision events according to different criteria, and the trigger introduces significant systematic uncertainties. In this work these uncertainties are reduced by using triggers with smaller uncertainties. For this, the analysis must be split into independent parts whose uncertainties are treated separately; the parts are then combined statistically, which is expected to reduce the overall uncertainty. This work studies whether, and by how much, this uncertainty decreases. Using these methods we were able to find small improvements in the precision of the analysis for heavy charged Higgs bosons. In addition, the expected limit above which charged Higgs production in this final state would be observable improves surprisingly much; this improvement is investigated by emulating the trigger. The work is intended to be included in the results published from the full Run 2 dataset.
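    The degree-of-freedom counting summarised in the abstract can be written out explicitly; this bookkeeping is standard electroweak theory rather than a result of the thesis:

```latex
% One doublet = 2 complex fields = 4 real degrees of freedom.
% Two doublets give 2 x 4 = 8; three are absorbed as the longitudinal
% modes of the W+, W- and Z bosons, leaving five physical Higgs states.
\[
  \underbrace{2 \times 4}_{\text{two doublets}}
  \;-\; \underbrace{3}_{W^{+},\, W^{-},\, Z}
  \;=\; 5
  \quad\Longrightarrow\quad h,\ H,\ A,\ H^{+},\ H^{-}
\]
```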
  • Anttila, Kamilla (2020)
    Most machine learning projects consist of four distinct phases: data preparation, model training, model validation, and inference serving. Even though all of these phases are vital components of a successful machine learning project, the focus of most machine learning work is solely on the training of models. The other phases often need to be implemented with ad-hoc solutions, which can easily lead to technical debt. Technical debt is a metaphor for describing the quality of a software project. It describes the state of a project by comparing it to a financial loan. During software development, a loan can be taken to add value to the present state of the system. However, the loan comes with interest and has to be paid back. A loan can be taken, for example, by writing low-quality code to meet a deadline. The loan has to be paid back by rewriting the code later, or else it will start to accrue interest. The interest can be seen in the code functioning poorly or requiring substantial amounts of time to be understood. If a loan is not paid back, the interest keeps increasing, making it more and more difficult to pay the loan back later. In this thesis, we study the effect machine learning frameworks have on technical debt. We describe the machine learning project lifecycle and the various sources of technical debt associated with it. We review available machine learning frameworks and their mitigation strategies for the technical debt in machine learning projects. Our insights demonstrate how frameworks can be used to reduce the overall technical debt in machine learning projects.
  • Mäenpää, Hanna (2013)
    The change from the prescriptive 'waterfall' software process to iterative and incremental models has created a need to redefine software requirements engineering. Agile methodologies have emerged to support the paradigm shift by treating the symptoms: emphasizing change management and customer collaboration to embrace the volatility of requirements and priorities during development. However, it has been recognized that fast-paced agile development does not provide sufficient support for initial or long-term planning of the software product. Research and practitioner literature have started to address this need with the concept of a high-level definition of the software project's outcome: the software Product Vision. In this thesis, uncertainty in new product development is studied from the perspective of Innovation Management. As a vehicle for reducing uncertainty in software projects, the concept of a software Product Vision (the reason for the project's existence) is examined from the viewpoints of New Product Development and Software Engineering literature. The work describes sources of uncertainty in software projects and explains the effects of a mutually understood software Product Vision on software project performance and end-product acceptance. Key parameters for an interdisciplinary and unified software Product Vision are identified by studying four existing and one emergent Product Vision models. Finally, a new Product Vision framework (InnCa) is created based on semantic analysis. The framework's applicability to software projects is evaluated in three participatory action research case studies. As a result, it is concluded that common parameters of an interdisciplinary 'Product Vision' can be identified. The framework created can be used to ideate, rapidly capture, iterate and analyze vague software ideas. It is applicable for sharing knowledge about the project's high-level goals amongst the project's stakeholders. However, it is not argued in this thesis that the framework could be used in all kinds of projects and circumstances. As uncertainty in software projects is a chaotic and complex phenomenon, no 'silver bullet' can address all situations. The topic of the software Product Vision may provide grounds for further research, possibly leading to practical tools for assessing and quantifying uncertainty about goals during a software project's trajectory.
  • Brandtberg, Ronnie (2020)
    Re-engineering can be described as a process for updating an existing system in order to meet new requirements. Restructuring and refactoring are activities that can be performed as part of the re-engineering process. Supporting new requirements, such as migrating to new frameworks, new environments and new architectural styles, is essential for preserving quality attributes like maintainability and evolvability. Many larger legacy systems slowly deteriorate in quality over time, and adding new functionality becomes increasingly difficult and costly as technical debt accumulates. To modernize a legacy system and improve the cost-effectiveness of implementing new features, a re-engineering process is often needed. The alternative is to develop a completely new system, but this can lead to the loss of years of accumulated functionality and be too expensive. Re-engineering strategies can be specialized to solve specific needs, such as cloud migration, or be more generic in nature and support several kinds of needs. Different approaches are suitable for different kinds of source and target systems. The choice of a re-engineering strategy is also influenced by organisational and business factors. Re-engineering a highly tailored legacy system in a small organisation is different from re-engineering a scalable system in a large organisation. Generic and flexible solutions are well suited especially for smaller organisations with complex systems. The re-engineering strategy Renaissance was applied in a case study at Roima Intelligence Oy in order to find out whether such a strategy is realistically usable, useful and valuable for a smaller organization. The results show that a re-engineering strategy can be used with low overhead to prioritize different parts of the system and to determine a suitable modernization plan. Renaissance was also shown to add value, especially in the form of a deeper understanding of the system and a structured way to evaluate different options for modernization. This is achieved by assessing the system from different views, taking into account especially business and technical aspects. A lesson learned about Renaissance is that determining an optimal scope for the system assessment is challenging. The results are applicable to other organisations dealing with complex legacy systems and constrained resources. Limitations of the study are that the number of different re-engineering strategies discussed is small, and a systematic mapping study might uncover strategies more suitable than Renaissance. The number of experts participating in the process itself as well as in the evaluation was also low, introducing some uncertainty to the validity of the results. Further research is needed to determine how specialized and generic re-engineering strategies compare in terms of required resources and added value.
  • Ihalainen, Hannes (2022)
    The so-called declarative approach has proven to be a viable paradigm for solving various real-world NP-hard optimization problems in practice. In the declarative approach, the problem at hand is encoded using a mathematical constraint language, and an algorithm for the specific language is employed to obtain optimal solutions to an instance of the problem. One of the most viable declarative optimization paradigms of recent years is maximum satisfiability (MaxSAT) with propositional logic as the constraint language. So-called core-guided MaxSAT algorithms are arguably one of the most effective MaxSAT-solving paradigms in practice today. Core-guided algorithms iteratively detect and rule out (relax) sources of inconsistency (so-called unsatisfiable cores) in the instance being solved. Especially effective are recent algorithmic variants of the core-guided approach which employ so-called soft cardinality constraints for ruling out inconsistencies. In this thesis, we present a structure-sharing technique for the cardinality-based core relaxation steps performed by core-guided MaxSAT solvers. The technique aims at reducing the inherent growth in the size of the propositional formula resulting from the core relaxation steps. Additionally, it enables more efficient reasoning over the relationships between different cores. We empirically evaluate the proposed technique on two different core-guided algorithms and provide open-source implementations of our solvers employing the technique. Our results show that the proposed structure sharing can improve the performance of the algorithms both in theory and in practice.
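    As a concrete illustration of the core-guided paradigm described above, the short sketch below runs an existing core-guided MaxSAT solver on a toy weighted instance. It assumes the python-sat (pysat) package; RC2, an OLL-style solver that relaxes unsatisfiable cores with soft cardinality constraints, is used purely for illustration and is not the solver implementation developed in the thesis.

```python
# Minimal usage sketch of a core-guided MaxSAT solver (pysat's RC2).
# RC2 iteratively extracts unsatisfiable cores and relaxes them with
# soft cardinality constraints internally; it is unrelated to the
# thesis' own implementation.
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

wcnf = WCNF()
wcnf.append([1, 2])            # hard clause: x1 or x2
wcnf.append([-1, 3])           # hard clause: not x1 or x3
wcnf.append([-2], weight=1)    # soft clause: prefer x2 false
wcnf.append([-3], weight=1)    # soft clause: prefer x3 false
wcnf.append([1], weight=1)     # soft clause: prefer x1 true

with RC2(wcnf) as solver:
    model = solver.compute()   # optimal model; cores are relaxed internally
    print("optimal cost:", solver.cost, "model:", model)
```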
  • Andersson, Elina (2013)
    Critical cartographic research has shown that maps are connected to national geopolitics and that they reflect local and national interests and agendas. The map designer and their background can affect the contents and design of a map to a great extent. Research in critical cartography has so far mostly concentrated on traditional paper maps, but nowadays maps are increasingly read on the Web. Because of the growing influence of web maps, it is important to investigate whether and how web maps are connected to geopolitical agendas and interests, and how they picture the world. This research investigates three web map services that are free of charge and, in principle, open to anyone at any time. The map services originate in different parts of the world: ABmaps is Israeli, Google Maps is American and Yandex Maps is Russian. The services are investigated with the help of two structured content analyses, one focusing especially on the maps' design and tools, and another concentrating on the study area termed 'the Heart of the Middle East'. The maps are approached with the help of semiotic, hermeneutic and deconstructive theories. The results show that, like traditional paper maps, web map services are also connected to geopolitical agendas. The national interests are evident in that each service's home country is portrayed in favorable ways, for example colorfully and at large size. Google Maps pictures the whole world fairly consistently, while ABmaps and Yandex Maps render areas outside their interest in poor detail and color. It is evident that it is easy to distort a dynamic web map and make statements on political disputes. Since web map services have a large and growing number of users, it is crucial that map users are aware of the distortions that the maps may contain, and thereby of how the picture of the world is distorted.
  • Auvinen, Aleksi (2024)
    Deforestation is an ongoing issue worldwide, and the loss of forests, coupled with climate change, is causing significant changes in global biodiversity and ecosystem functioning. Currently, forests cover approximately 13% of the land area in the United Kingdom, making it one of the least forested countries in Europe. Reforestation efforts aim to increase forest area, ensuring the provision of ecosystem services, biodiversity, carbon storage, and species conservation. The goal in the United Kingdom is to increase forest cover from 13% to 17% nationwide by 2050. However, research on the impacts of climate change largely relies on broad-scale climate data covering areas greater than 1 km². These broad-scale climates, also called macroclimates, affect large areas over long time spans and are spatially very coarse. Many species, however, experience temperatures and weather conditions that differ significantly from the macroclimate. Microrefugia created by microclimates can provide habitats for species requiring cooler conditions in a changing climate. Microclimates strongly influence forest ecology, as they enhance carbon sequestration, microbial activity, and decomposition processes in forests. Many different factors influence the formation of microclimates, such as solar radiation, air temperature, precipitation, soil temperature, humidity, and wind. Vegetation affects radiation and wind near the ground, creating the characteristic microclimate of each area. Buffering refers to the ability of forests to absorb or resist changes in temperature, thereby maintaining more stable temperature conditions than those outside the forest. This study aims to answer the following questions: 1. How well can forests buffer macroclimate temperatures and create microclimates? 2. What kind of forest structures create microclimates that differ from the macroclimate? 3. Which types of forests planted in Scotland best support the creation of microrefugia? For this study, microclimate measurements and remote sensing data (TLS) were collected from 21 forest sites in England and Scotland. Macroclimate temperatures were determined using ERA5-Land data and temperature data from nearby weather stations. Using linear models and statistical analyses, slope values representing buffering were derived for each forest plot. The results indicate which types of forests enhance temperature buffering and create microclimate conditions: broadleaf and coniferous forests effectively buffer temperatures during the leaf-on period, while their effectiveness diminishes during the leaf-off period. Broadleaf forests showed buffering during the leaf-on period but reduced buffering during the leaf-off period. Coniferous forests maintained better buffering during the leaf-on period and low buffering during the leaf-off period. Monoculture forests provided consistent buffering, while older and multi-age forests performed best in both periods, demonstrating the importance of structural complexity and diversity. Certain species, such as spruce, Scots pine, and oak, showed strong buffering capabilities year-round. A linear mixed-effects model confirmed that forest structural traits such as Foliage Height Diversity and Relative Height, together with factors such as hillslope, elevation, and tree type, significantly influence temperature buffering.
    Maintaining diverse and structurally complex forests with a mix of species like spruce, Scots pine, and oak is essential for optimizing temperature buffering and creating stable microclimates and microrefugia. These forests can better withstand temperature fluctuations and provide habitats for species affected by climate change. The study highlights the importance of long-term forest growth and diverse understories in enhancing forest resilience and ecological stability. Further research is needed to understand the broader implications of forest management practices for biodiversity and ecosystem functioning, and to determine where reforestation in Scotland can be implemented most effectively.
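    A minimal sketch of the slope-based buffering analysis described above: microclimate temperature is regressed on macroclimate temperature for each plot, and the plot-level slopes are then related to structural traits with a mixed-effects model. The file name, column names and trait variables are illustrative assumptions, not those used in the thesis; the sketch assumes pandas and statsmodels.

```python
# Hedged sketch: per-plot buffering slope, then a mixed-effects model
# relating the slopes to (hypothetical) structural traits.
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical input: one row per measurement with columns
# plot, site, t_macro, t_micro, fhd, elevation
df = pd.read_csv("plot_temperatures.csv")

# slope of micro vs. macro temperature per plot; slope < 1 indicates buffering
slopes = (df.groupby("plot")
            .apply(lambda g: smf.ols("t_micro ~ t_macro", data=g).fit().params["t_macro"])
            .rename("buffering_slope")
            .reset_index())

# relate plot-level slopes to structural traits, random intercept per site
traits = df.drop_duplicates("plot")[["plot", "site", "fhd", "elevation"]]
data = slopes.merge(traits, on="plot")
model = smf.mixedlm("buffering_slope ~ fhd + elevation", data, groups=data["site"]).fit()
print(model.summary())
```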
  • Pirani, Edoardo (2024)
    Agriculture is associated with one-third of global land use and is responsible for 21% of total greenhouse gas emissions. At the same time, food demand is set to increase, driven by population growth. Climate change adaptation and mitigation interventions in agriculture are therefore increasingly central to addressing soil degradation, loss of biodiversity and food insecurity, and Regenerative Agriculture is one of the alternatives proposed to the current agri-food system. Understanding the interlinkages between regenerative agriculture and positive deviance among smallholder farmers in Taita-Taveta County, Kenya, can help align agricultural practices with regenerative agriculture interventions that aim to adapt farming activities to climate change and mitigate their impacts, and can thus provide information to decision makers on how to support farmers in this transition. Key informant interviews (11 informants) and a household survey (96 respondents) were used to collect data. A spatial analysis allowed a comparison between three distinct agro-ecological zones, highlighting potential differences in the adoption of regenerative agriculture techniques and in the strategies implemented by positive deviants. By studying how geographical factors influence the adoption of agricultural practices, this thesis is situated in the field of human geography. The results suggest that both regenerative agriculture adoption and positive deviance are highly context-dependent. Positive deviants typically shifted from subsistence agriculture to high-value crops. By engaging in contract farming, they accessed reliable markets, financing, and inputs, and received private extension services. In the lowlands, positive deviants excelled at coping with water scarcity and mitigating the effects of climate change, while in the highlands they strategically ventured into horticulture at a commercial level. Overall, while regenerative agriculture practices played a role in climate-resilient agriculture, their adoption was not clearly linked with positive deviance.
  • Lapinlampi, George (2020)
    Changing means cause a specific but sometimes quite significant problem in time series modeling. First, the foundations of the model addressing this problem are introduced in the form of the basic theory of Markov chains and of problems related to hidden Markov chains. The approach builds on the ARMA (autoregressive moving average) model but utilizes estimation methods from areas not specifically dedicated to time series analysis. The hybrid approach, comprising Markov chains, the EM (expectation-maximization) algorithm, and linear modeling, may be well justified when conventional methods do not produce the desired results and the modeler has the competence and means to attempt more sophisticated approaches. The literature review provides insight into earlier models that led to the development of the model investigated in this work. Finally, in the empirical part, the model's power is assessed against the conventional ARMA model. The modeling is performed on simulated series in order to assess the functionality of the EM algorithm, to have precise knowledge of the true state variables, and to obtain an optimal comparison between linear and non-linear models. The models are compared using multiple diagnostic procedures such as the AIC (Akaike information criterion), autocorrelation and partial autocorrelation functions, residual variance, and other descriptive statistical measures.
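    The comparison described above can be sketched in a few lines: simulate a series whose mean switches according to a hidden two-state Markov chain, then fit a plain ARMA model and a Markov-switching autoregression and compare their AIC values. This assumes numpy and statsmodels; the simulation settings and model specification are illustrative and not those of the thesis.

```python
# Hedged sketch: AR(1) data with a regime-switching mean, compared
# under a plain ARMA model and a Markov-switching autoregression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, means, p_stay = 500, [0.0, 3.0], 0.97
state, prev, y = 0, 0.0, np.empty(n)
for t in range(n):
    if rng.random() > p_stay:            # hidden Markov chain over the mean
        state = 1 - state
    prev = means[state] + 0.5 * (prev - means[state]) + rng.normal(scale=1.0)
    y[t] = prev

arma = sm.tsa.ARIMA(y, order=(1, 0, 0)).fit()
msar = sm.tsa.MarkovAutoregression(y, k_regimes=2, order=1,
                                   switching_ar=False).fit()
print("ARMA AIC:", arma.aic, " Markov-switching AIC:", msar.aic)
print(msar.summary())
```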
  • Saikko, Paul (2015)
    Real-world optimization problems, such as those found in logistics and bioinformatics, are often NP-hard. Maximum satisfiability (MaxSAT) provides a framework within which many such problems can be efficiently represented. MaxHS is a recent exact algorithm for MaxSAT. It is a hybrid approach that uses a SAT solver to compute unsatisfiable cores and an integer programming (IP) solver to compute minimum-cost hitting sets for the found cores. This thesis analyzes and extends the MaxHS algorithm. To enable this, the algorithm is re-implemented from scratch using the C++ programming language. The resulting MaxSAT solver LMHS recently gained top positions at an international evaluation of MaxSAT solvers. This work looks into various aspects of the MaxHS algorithm and its applications. The impact of different IP solvers on the MaxHS algorithm and the behavior induced by different strategies of postponing IP solver calls are examined. New methods of enhancing the computation of unsatisfiable cores in MaxHS are investigated. Fast core extraction through parallelization by partitioning soft clauses is explored. A modification of the final conflict analysis procedure of a SAT solver is used to generate additional cores without additional SAT solver invocations. The use of additional constraint propagation procedures in the SAT solver used by MaxHS is investigated. As a case study, acyclicity constraint propagation is implemented and its effectiveness for bounded-treewidth Bayesian network structure learning using MaxSAT is evaluated. The extension of MaxHS to the labeled MaxSAT framework, which allows for more efficient use of preprocessing techniques and group MaxSAT encodings in MaxHS, is discussed. The re-implementation of the MaxHS algorithm, LMHS, also enables incrementality: constraints can be efficiently added to a MaxSAT instance during the solving process. As a case study, this incrementality is used in solving subproblems with MaxSAT within GOBNILP, a tool for finding optimal Bayesian network structures.
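    The core idea of the algorithm, alternating between core extraction with a SAT solver and minimum-cost hitting set computation, can be illustrated with the toy loop below. It assumes the python-sat (pysat) package, uses unit-cost soft clauses, and replaces the IP solver with brute-force hitting set enumeration; it is a sketch of the general MaxHS-style loop, not the MaxHS or LMHS implementation.

```python
# Toy implicit-hitting-set (MaxHS-style) loop: a SAT solver extracts
# unsatisfiable cores over soft-clause selectors, and a (here brute
# force, in MaxHS an IP-based) minimum-cost hitting set over the cores
# decides which soft clauses to relax next.
from itertools import combinations
from pysat.solvers import Glucose3

hard = [[1, 2], [-1, 3]]        # hard clauses (must hold)
soft = [[-2], [-3], [1]]        # soft clauses, unit cost each

solver = Glucose3(bootstrap_with=hard)
top = 4                          # first unused variable id
selectors = {}                   # selector literal -> soft clause index
for i, cl in enumerate(soft):
    s = top + i
    selectors[s] = i
    solver.add_clause(cl + [s])  # clause is disabled when s is true

cores, best = [], None
while True:
    # cheapest set of soft-clause indices hitting every core found so far
    for k in range(len(soft) + 1):
        hit = next((set(c) for c in combinations(range(len(soft)), k)
                    if all(set(c) & core for core in cores)), None)
        if hit is not None:
            break
    # assume the selectors of all non-relaxed soft clauses to be false
    assumps = [-s for s, i in selectors.items() if i not in hit]
    if solver.solve(assumptions=assumps):
        best = hit               # satisfiable: hitting set cost is optimal
        break
    core = {selectors[abs(l)] for l in solver.get_core()}
    cores.append(core)

print("optimal cost:", len(best), "relaxed soft clauses:", sorted(best))
```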
  • Maaranen, Timo (2015)
    This thesis presents pathfinding methods commonly used in modern games, as well as the search spaces in which these methods operate. It introduces the principles of pathfinding starting from the basics and is therefore suitable, for example, as an introduction to the topic. The viewpoint is practical: the thesis aims to take into account the limitations of gaming devices, the difficulty of implementing the search methods, and, on the other hand, the requirements imposed by the rules of the games. CCS classification: Computing methodologies ~ Motion path planning; Applied computing ~ Computer games
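    As an example of the kind of method surveyed above, the sketch below implements A* search on a small 4-connected grid with the Manhattan distance as the heuristic, one of the classic pathfinding algorithms used in games. The grid, start and goal are illustrative.

```python
# Minimal A* on a 4-connected grid with Manhattan-distance heuristic.
import heapq

def astar(grid, start, goal):
    """grid: list of strings, '#' blocks movement; returns a path or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start)]      # (f = g + h, g, position)
    came_from, best_g = {}, {start: 0}
    while open_heap:
        _, g, cur = heapq.heappop(open_heap)
        if cur == goal:                     # reconstruct the path backwards
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] != '#'
                    and g + 1 < best_g.get(nxt, float('inf'))):
                best_g[nxt] = g + 1
                came_from[nxt] = cur
                heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt))
    return None

grid = [".#..",
        ".#..",
        "....",
        "..#."]
print(astar(grid, (0, 0), (0, 3)))
```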
  • Kyyhkynen, Juho (2018)
    The aim of this thesis is to show the equivalence of the recursive functions and the lambda-definable functions. Both families of functions are models of computation that contributed to the birth of computability theory and describe processes that can be automated. Kurt Gödel needed recursive functions to mechanize provability in predicate logic. Lambda-definability, in turn, is based on the lambda calculus conceived by Alonzo Church, which consists of formulas representing functions and of conversion rules defined for them. (In Finnish, the lambda calculus has occasionally also been rendered as 'lambdakalkyyli'.) Alongside the recursive functions, the lambda-definable functions have been shown to coincide with several other formalisms of computation. This has led to the conjecture that the recursive operations are precisely those that can be realized by some algorithm or automated procedure at all. In the years following the birth of the lambda calculus, computability theory gradually developed; it studies what such mechanical systems, such as computers, can solve even in principle. Chapter 2 introduces the family of recursive functions and the recursively decidable relations. Prior knowledge of mathematical logic and recursive functions is helpful, since the semantic correctness of all definitions is not discussed. Chapter 3 introduces the expressions and conversion rules of the lambda calculus and proves the Church-Rosser theorem, which establishes the lambda calculus as a workable system of computation. In addition, the fixed-point theorem for lambda terms is proved; the corresponding result for recursive functions requires a considerably more intricate proof. No prior knowledge of the lambda calculus is assumed. Chapter 4 introduces the set of lambda-definable functions, which in the final chapter is shown to be the same as the set of recursive functions. Computability theory often allows partial functions, whose value need not be defined at every point; this thesis, however, deals only with total functions.
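    As a small illustration of lambda-definability, written in Python's own lambda notation: the Church numerals represent natural numbers as pure functions, and recursive operations such as successor and addition can be expressed entirely with lambdas. This is an illustrative aside, not part of the thesis.

```python
# Church numerals: a natural number n is the function that applies f n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Evaluate a Church numeral to an ordinary integer."""
    return n(lambda k: k + 1)(0)

two, three = succ(succ(zero)), succ(succ(succ(zero)))
print(to_int(add(two)(three)))   # prints 5
```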
  • Mattila, Anne (2017)
    One of the greatest challenges in learning is students' preconceptions of scientific phenomena. These conceptions are usually only partly close to the scientific conception, and sometimes even completely contradictory to it. More recent research has focused on the cognitive factors behind the formation of preconceptions and has emphasized the role of relational knowledge in concept learning. Relations and relational knowledge can thus be regarded as central also in learning physics concepts, since the meaning of a concept is built on its internal relational structure. This study examined university students' conceptions of electric power through the concepts related to it. The topic is interesting and timely, as there is hardly any previous research on electric power. In addition to the concepts themselves, the study examined the descriptive words used with them as well as the relations and causal connections between them. The data were collected through interviews with physics major and minor teacher students. The interviews were video recorded. The interviews were based on a set of tutorial tasks, but the interviewers did not use predetermined questions; instead, each interview proceeded on the basis of the students' explanations. In the tutorial tasks, the students had to rank by brightness the lamps of circuits drawn as circuit diagrams. They first considered their answers independently and then discussed them in small groups of three to four. After the discussion, they built the circuits from the diagrams and compared their predictions with their observations. The video-recorded interviews were transcribed, and a data-driven qualitative analysis was carried out. The analysis identified the concepts students used in connection with electric power, the attributes attached to these concepts, and the relational connections between the concepts. Seven explanation models of different levels were formed for electric power. Based on the results, command of relational knowledge was found to matter for the formation of comprehensive explanation models. The most advanced explanation models were closest to the scientific conception and contained the most relations and mutually differentiated concepts. However, most of the students used only simple relations in their explanations or justified the behaviour of the circuits with learned formulas without specifying further what they mean. From their explanation models it can be inferred that deficient relational understanding may be one reason behind difficulties in learning scientific concepts.
  • Toikka, Nico (2023)
    Particle jets are formed in high-energy proton-proton collisions and then measured by particle physics experiments. These jets, initiated by the splitting and hadronization of color-charged quarks and gluons, serve as important signatures of the strong force and provide a view of size scales smaller than an atom. Understanding jets, their behaviour and their structure, is thus a path to understanding one of the four fundamental forces in the known universe. It is not only the strong force that is of interest, however. Studies of Standard Model physics and physics beyond the Standard Model require a precise measurement of the energies of final-state particles, often represented as jets, to understand our existing theories, to search for new physics hidden among our current experiments and to directly probe for new physics. As experimentally reconstructed objects, the measured jets require calibration. At the CMS experiment the jets are calibrated to the particle-level jet energy scale, and their resolution is determined, to achieve the experimental goals of precision and understanding. During the many-step calibration process, the position, energy and structure of the jets are taken into account to provide the most accurate calibration possible. It also matters greatly whether a jet is initiated by a gluon or a quark, as this affects the jet's structure, the distribution of energy among its constituents and the number of constituents. These differences cause disparities when calibrating the jets. Understanding jets at the theory level is also important for simulation, which is utilized heavily during calibration and represents our current theoretical understanding of particle physics. This thesis presents a measurement of the relative response between light-quark (up, down and strange) and gluon jets from the data of the CMS experiment recorded during 2018. The relative response is a measure of calibration between the objects and helps to show where the difference between quark and gluon jets is largest. The discrimination between light quarks and gluons is performed with machine learning tools, and the relative response is compared at multiple stages of reconstruction to see how different effects affect the response. The dijet sample used in this study provides a full view of the phase space in pT and |eta|, with the analysis covering both quark- and gluon-dominated regions of the space. These studies can then be continued with similar investigations of other samples, with the possibility of using the combined results as part of the calibration chain.
  • Lähde, Timo (University of Helsinki, 2000)
  • Suomela, Jukka (University of Helsinki, 2005)
  • Rämö, Miia (2020)
    In news agencies, there is a growing interest in automated journalism. The majority of the systems applied are template- or rule-based, as they are expected to produce accurate and fluent output transparently. However, this approach often leads to output that lacks variety. To overcome this issue, I propose two approaches. In the lexicalization approach new words are included in the sentences, and in the relexicalization approach some existing words are replaced with synonyms. Both approaches utilize contextual word embeddings for finding suitable words. Furthermore, the above approaches require linguistic resources, which are only available for high-resource languages. Thus, I present variants of the (re)lexicalization approaches that allow their utilization for low-resource languages. These variants utilize cross-lingual word embeddings to access the linguistic resources of a high-resource language. The high-resource variants achieved promising results; however, the sampling of words should be further enhanced to improve reliability. The low-resource variants also showed some promising results, but the quality suffered from the complex morphology of the example language. This is a clear next issue to address, and resolving it is expected to significantly improve the results.
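    The general idea of using contextual representations to propose in-context word substitutions can be sketched with a masked language model, as below. This uses the Hugging Face transformers fill-mask pipeline with bert-base-uncased purely as an illustration; the embedding-based method, the models and the languages used in the thesis differ.

```python
# Hedged sketch: rank candidate words for one position of a generated
# sentence using a contextual masked language model.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
sentence = "Unemployment [MASK] slightly during the last quarter."
for cand in fill(sentence, top_k=5):
    print(f"{cand['token_str']:>12}  score={cand['score']:.3f}")
```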
  • Nissilä, Raisa (2015)
    The ghost town of Varosha is a district of the city of Famagusta, located on the southeast coast of Cyprus. The thesis explores how Varosha is remembered on a specific, now removed, Facebook page called 'Varosha, Cyprus - What they don't want you to see'. It concentrates on the following questions: Why is it important to remember Varosha? How is Varosha pictured before and after the Turkish intervention/invasion of 1974, and how is the district's future seen? What memories are involved in 'specific places', and what are these places? The research material consists of postings and comments on the page made during a two-and-a-half-year data collection period. The research was conducted using thematic analysis. The research approach is deductive, in other words theory-driven. The study pays more attention to those themes that were repeated by several commentators, and numerous citations are used to support the observations. Varosha is nowadays a remembered place and a community that cannot be visited by the public. It represents a powerful memory to many people, whether they have experienced it first-hand or not. The meanings associated with a place vary according to one's relation to the place. Outsiders want to know what Varosha was and is like inside the fences. The stories that the residents and visitors tell and the feelings and memories they share keep the place alive. The page was aimed at keeping the memory of Varosha alive, getting word of the district's situation out to the world and exposing what the Turks have done to Varosha by providing photographic evidence. There were some topics that the administrators of the page had ruled out of the discussion; for example, disrespecting the feelings of the Varoshotes led to the removal of several postings, comments and commentators. The words used to describe Varosha's current state reflect the bitterness, sorrow and anger that many of the commentators and the administrators were feeling, whereas the words and descriptions connected to pre-1974 Varosha were all very positive. The depictions were divided according to the commentators' relation to the place. The idea of returning to Varosha is fuelled by nostalgia and the feeling of belonging to the fenced-off area. For the old residents and their offspring - second-generation 'Varoshotes' - it is not (just) about making Varosha habitable again but also, in some ways, about recreating the old community which was forcefully displaced over four decades ago. The research could be replicated on another Facebook page focused on remembering a place. However, Varosha's special history combined with the ongoing conflict in Cyprus has created somewhat special conditions for remembering. The page would also have provided material for studying otherness and hate speech.
  • Normo, Sanna (2023)
    Coronal mass ejections (CMEs) are large eruptions of magnetized plasma from the solar corona. Fast CMEs can drive shock waves that are capable of accelerating charged particles to high energies. These accelerated particles emit electromagnetic radiation, including radio emission. Studying the radio emission associated with CME-driven shocks offers a way to remotely investigate shock-accelerated electrons as well as the shock itself. Solar radio bursts are transient events in which the radio emission of the Sun rises above the background level. A classical division based on their appearance in a dynamic spectrum divides solar radio bursts into five categories: types I-V. Of these, type II and type IV radio bursts are most commonly associated with CMEs. Occasionally, type II radio bursts exhibit a bursty fine structure known as herringbones, which are regarded as signatures of individual electron beams accelerated by CME-driven shocks. This thesis studies the radio emission associated with a CME that erupted on 1 September 2014. White-light imaging of the CME revealed a prominent shock wave. Simultaneously, the dynamic spectrum exhibited spike-like radio emission resembling herringbones. The aim of the study presented in this thesis is to find the source location of this radio emission relative to a three-dimensional reconstruction of the shock. The source location of the radio emission can be used to infer the likely origin of the electrons responsible for it. Additionally, in situ electron flux measurements are investigated in an attempt to connect the remote and in situ detections of energetic electrons. Using interferometric radio observations of the Sun and reconstructing the CME shock in three dimensions revealed that the radio emission is located at the flank of the CME-driven shock. Such a location suggests that the spike-like radio emission observed in the dynamic spectrum originates from shock-accelerated electrons. The location of the radio emission at the flanks of the CME shock was also used to obtain an estimate of the lateral expansion of the CME. Although the in situ electron flux measurements detected high-energy electrons, their inferred release time at the Sun did not coincide with the observed radio emission.