
Browsing by Title


  • Ekblom, Madeleine (2017)
    Numerical weather prediction (NWP) models contain variables, such as wind vectors, temperature and pressure, as well as model parameters. Model parameters are an unavoidable part of an NWP model, since they are needed in the parameterisation schemes for small-scale physical processes. The model parameters affect the model's ability to forecast the weather, and it is therefore of great importance that the parameters have optimal values. The parameter values are determined manually using observations and expert knowledge, which is a laborious process. To reduce the workload and free up resources for other tasks, algorithms can be used to determine the optimal values of the model parameters. In this study we investigate a statistical algorithm used in an ensemble prediction system. The algorithm updates and improves the parameter values based on observations and a cost function. Previous results have shown that the algorithm performs well with an ensemble of 50 members. This study examines whether the ensemble size can be reduced when optimising two parameters while still achieving good results. In the experiments, the Lorenz96 model is used to emulate NWP and the ensemble size is varied between two and 50. The results indicate that an ensemble size of four gives good results when the initial setting is optimal, whereas less optimal conditions require more ensemble members, in this case ten. Based on these experiments we conclude that the algorithm is limited not by the ensemble size but rather by the model and the initial setting of the parameter optimisation: the better the initial setting, the smaller the ensemble that can be used. The experiments indicate that similar results can be expected for NWP.
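    The abstract refers to the Lorenz96 model and an ensemble-based, cost-function-driven parameter update. The sketch below is an illustration only, not the thesis's algorithm: it integrates Lorenz96 with a tunable forcing parameter and weights a small parameter ensemble by a quadratic cost against synthetic observations. NumPy is assumed and all names and values are invented.

      import numpy as np

      def lorenz96_tendency(x, forcing):
          # dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F
          return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

      def integrate(x, forcing, dt=0.01, steps=100):
          # Fourth-order Runge-Kutta time stepping.
          for _ in range(steps):
              k1 = lorenz96_tendency(x, forcing)
              k2 = lorenz96_tendency(x + 0.5 * dt * k1, forcing)
              k3 = lorenz96_tendency(x + 0.5 * dt * k2, forcing)
              k4 = lorenz96_tendency(x + dt * k3, forcing)
              x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
          return x

      rng = np.random.default_rng(0)
      x0 = 8.0 + rng.standard_normal(40)              # common initial state for all members
      obs = integrate(x0.copy(), forcing=8.0)         # synthetic "observations" from the true parameter
      params = 8.0 + rng.normal(0.0, 0.5, size=4)     # ensemble of four candidate parameter values
      costs = np.array([np.mean((integrate(x0.copy(), p) - obs) ** 2) for p in params])
      weights = np.exp(-0.5 * (costs - costs.min()))  # subtract the minimum for numerical stability
      weights /= weights.sum()
      print("updated parameter estimate:", np.sum(weights * params))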
  • Corner, Joona (2023)
    The aim of this work is to develop and optimise an atmospheric inverse modelling system to estimate local methane (CH4) emissions in peatlands. Peatlands are a major regional source of CH4 in boreal areas and they are significant on a global scale as a soil carbon store. Data assimilation in the inverse modelling system is based on an ensemble Kalman filter (EnKF), which is widely used in global and regional atmospheric inverse models. The EnKF in this study is an implementation of the EnKF used in the global atmospheric inversion model CarbonTracker Europe-CH4 (CTE-CH4), applied to a local setting in a peatland. Consistency of the methodology with regional and global models means that the system can be expanded in scale. Siikaneva fen in Southern Finland is used as a testbed for the optimisation of the system. Prior natural CH4 fluxes in Siikaneva are acquired from the HelsinkI Model of MEthane buiLd-up and emIssion for peatland (HIMMELI), which simulates the exchange of gases in peatlands. In addition to the peatland fluxes, anthropogenic fluxes at the site are also estimated in the inversion. For the assimilation of atmospheric CH4 concentration observations, the CH4 fluxes are transformed into atmospheric concentrations with a simple one-dimensional box model. The system was optimised by changing model parameters that affect the data assimilation. In the optimisation tests it was discovered that the performance of the modelling system is unstable: there was large variability in the produced estimates between consecutive model runs, and model evaluation statistics did not indicate improvement of the estimates after the inversion. No exact reason for the instability could be determined. Posterior estimates of CH4 fluxes for the years 2012–2015 did not differ much from the prior estimates and they had large uncertainty. However, evaluation against flux measurements showed reasonable agreement, and the posterior concentration estimates were within the uncertainty range of the observed concentrations.
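    The EnKF analysis step referred to above can be summarised, in a generic stochastic form, by the sketch below. It is not the CTE-CH4 implementation; the state vector of flux scale factors and the toy "box model" are purely illustrative, and NumPy is assumed.

      import numpy as np

      def enkf_update(ensemble, observations, obs_operator, obs_error_std, rng):
          """Generic stochastic ensemble Kalman filter analysis step.
          ensemble     : (n_members, n_state) prior states (e.g. CH4 flux scale factors)
          observations : (n_obs,) observed CH4 concentrations
          obs_operator : maps a state vector to model-equivalent concentrations
          """
          n_members = ensemble.shape[0]
          simulated = np.array([obs_operator(member) for member in ensemble])
          state_anom = ensemble - ensemble.mean(axis=0)
          sim_anom = simulated - simulated.mean(axis=0)
          # Sample covariances between the state and the simulated observations.
          cov_xy = state_anom.T @ sim_anom / (n_members - 1)
          cov_yy = sim_anom.T @ sim_anom / (n_members - 1) + np.eye(len(observations)) * obs_error_std**2
          gain = cov_xy @ np.linalg.inv(cov_yy)
          # Perturb the observations for each member (stochastic EnKF).
          perturbed = observations + rng.normal(0.0, obs_error_std, size=(n_members, len(observations)))
          return ensemble + (perturbed - simulated) @ gain.T

      rng = np.random.default_rng(0)
      prior = np.abs(rng.normal(1.0, 0.3, size=(20, 2)))             # 20 members, 2 flux scale factors
      box_model = lambda s: np.array([1900.0 + 40.0 * s[0] + 10.0 * s[1]])  # toy flux-to-concentration map (ppb)
      posterior = enkf_update(prior, np.array([1935.0]), box_model, obs_error_std=5.0, rng=rng)
      print(posterior.mean(axis=0))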
  • Latva-Äijö, Salla-Maaria (2016)
    The subject of this study is the optimization of contrast-enhanced CT (computed tomography) of the intra-abdominal lymphatic system in the case of a chylous leakage. A chylous leakage is a pathological condition in which lymphatic fluid (chyle) leaks out of the vessels of the lymphatic system into the intra-abdominal space. The diagnosis is rare. CT imaging is based on the different attenuation of X-rays as they penetrate different kinds of tissue. Denser materials, such as iodine, attenuate the X-rays more strongly and can be used to improve the contrast in the image. The iodinated contrast agent Lipiodol was chosen for this study to increase contrast in CT images of the intestines. The right Lipiodol–oil mixing ratio was sought so that the agent would end up in the lymphatic system. The idea was to visualize the lymphatic system and spot the leakage point, which would be useful information when planning follow-up treatment, especially in the case of a surgical operation. The examinations were carried out by imaging plastic phantoms with a CT device. The CT images were taken with head and torso phantoms containing different mixing ratios of Lipiodol–oil mixtures. The most suitable mixing ratio of Lipiodol and oil was then estimated from the phantom images. After that, the results were applied to two chylous leakage patients, a 70-year-old male patient and a pediatric patient. The CT images of the patients were analysed with the ImageJ program. The contrast enhancement with the Lipiodol contrast agent was excellent, about 300–400 HU at 120 kV tube voltage. The suitable mixing ratio of the Lipiodol–oil mixture was estimated to be 1:8 for the adult patient and 1:10 for the pediatric patient. The leakage point could not be localized from the CT images. The reason for that might have been the image timing with the pediatric patient: she might have had an exceptional rate of fat metabolism because of her congenital malformations of the intestines. Biokinetic variations and the variability of the mixing ratio in different parts of the intestines also increase the uncertainty of the results of the study. The basic principle of Lipiodol as a contrast agent has now been tested and its properties have been found suitable for similar visualization examinations. The protocol for chylous leakage imaging can be developed and tested further when suitable patients appear.
  • Molander, Andreas (2020)
    The Standard Model (SM) is the best established theory describing the observed matter and its interactions through all the fundamental forces except gravity. The SM is, however, not complete. For example, it does not explain the large difference between the electroweak scale and the Planck scale, known as the hierarchy problem, nor does it explain dark matter. Therefore there is a need for more comprehensive theories beyond the SM. Supersymmetry (SUSY) extends the SM by predicting a partner particle (sparticle) for each currently known elementary particle. Among its benefits, it offers an explanation for the hierarchy problem and predicts the existence of a good particle candidate for dark matter. However, there is no experimental evidence for SUSY so far. The search for SUSY particles is currently ongoing at the experiments using the Large Hadron Collider (LHC) at CERN. So far, the searches have focused on strongly interacting supersymmetric particles, still without findings. One of the parameter ranges still to be covered is the compressed mass scenario in the lower mass end for weakly interacting sparticles, where the lightest and second-lightest supersymmetric particles do not differ much in mass. If they exist, low-mass SUSY particles could be created in the LHC from two fusing photons emitted by forward-scattered protons. In such two-photon (central exclusive) processes, both protons might remain on-shell and continue their path down the beamline. Central exclusive processes are rather rare, so to advance the study of these events, new tagging techniques are required to record as many of them as possible. We are interested in the kinematic range with a mass difference of less than 60 GeV between the slepton and the neutralino, which are the supersymmetric partners of the lepton and the neutral bosons. The CMS detector at the LHC has two event filtering (trigger) systems: the low-level (L1) trigger and the high-level trigger (HLT). A study has been conducted on how a specific HLT could increase the number of recorded events for the previously mentioned process without significantly increasing the total HLT rate. To select more events, the transverse momentum threshold of the produced leptons ought to be lowered. The forward-scattered protons will be detected by the Precision Proton Spectrometer (PPS). This thesis shows that requiring proton tracks in the PPS tracking detectors and tuning their multiplicity cut will compensate for the lowering of the transverse momentum threshold, keeping the overall HLT rate sensible while still enabling more interesting physics to be recorded.
  • Haapasalmi, Risto (2020)
    In recent years, highly compact succinct text indexes developed in bioinformatics have spread to the domain of natural language processing, in particular n-gram indexing. One line of research has been to utilize compressed suffix trees as both the text index and the language model. Compressed suffix trees have several favourable properties for compressing n-gram strings and associated satellite data while allowing both fast access and fast computation of the language model probabilities over the text. When it comes to count-based n-gram language models, and especially low-order n-gram models, the Kneser-Ney language model has long been the de facto industry standard. Shareghi et al. showed how to utilize a compressed suffix tree to build a highly compact index that is competitive with state-of-the-art language models in space. In addition, they showed how the index can work as a language model and allows computing modified Kneser-Ney probabilities straight from the data structure. This thesis analyzes and extends the work of Shareghi et al. in building a compressed suffix tree based modified Kneser-Ney language model. We explain their solution and present three attempts to improve the approach. Of the three experiments, one performed far worse than the original approach, but two showed minor gains in time with no real loss in space.
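    To make the quantities involved concrete, the sketch below computes interpolated Kneser-Ney bigram probabilities from plain counts (a single discount rather than the modified variant with multiple discounts, and without any suffix-tree machinery). A compressed suffix tree based index would supply the same counts without storing the n-grams explicitly. Plain Python, illustrative only.

      from collections import Counter

      def kneser_ney_bigram(tokens, discount=0.75):
          """Interpolated Kneser-Ney bigram probabilities computed from raw counts."""
          bigrams = Counter(zip(tokens, tokens[1:]))
          context_counts = Counter(tokens[:-1])
          continuations = Counter(w for (_, w) in bigrams)   # distinct left contexts per word, N1+(. w)
          followers = Counter(c for (c, _) in bigrams)       # distinct continuations per context, N1+(c .)
          total_bigram_types = len(bigrams)

          def prob(word, context):
              p_cont = continuations[word] / total_bigram_types        # lower-order continuation probability
              if context_counts[context] == 0:
                  return p_cont
              lam = discount * followers[context] / context_counts[context]
              p_high = max(bigrams[(context, word)] - discount, 0) / context_counts[context]
              return p_high + lam * p_cont
          return prob

      p = kneser_ney_bigram("the cat sat on the mat the cat ran".split())
      print(p("cat", "the"))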
  • Paavonen, Aleksi (2024)
    The ever-changing world of e-commerce prompted the case company to develop a new, improved online store for its business functions, which also prompted the need to understand the relevant metrics. The aim of the research is to find the customer behaviour metrics that have explanatory power for the response variable, the count of transactions. Examining these key metrics provides an opportunity to create a sustainable foundation for future analytics. Based on the results, the case company can develop its analytics as well as understand the weaknesses and strengths of the online store. The data come from the Google Analytics service and each variable receives a daily value, but the data are not treated as a time series. The response variable is not normally distributed, so a linear model was not suitable. Instead, the natural choice was generalized linear models, as they can also accommodate non-normally distributed response variables. Two different models were fitted: a Poisson-distributed and a Gamma-distributed model. The models were compared in many ways, but no clear difference in their performance was found, so the results were combined from both models. The results provided by the models were quite similar, but there were differences. For this reason, the explanatory variables were divided into three categories: key variables, variables with differing results, and non-significant variables. Key variables have explanatory power for the response variable, and the results of the models were consistent. For variables with differing results, the results of the models differed, and for non-significant variables there was no explanatory power for the response variable. This categorization facilitates understanding of the results. In total, six explanatory variables were categorized as key variables, one as a variable with differing results, and two as non-significant. In conclusion, it matters which variables are tracked if the efficiency of the web store is developed based on the efficiency of transactions.
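    A minimal sketch of the kind of model comparison described above, using statsmodels: a Poisson GLM and a Gamma GLM fitted to a daily transaction count. The data and column names here are synthetic placeholders, not the case company's metrics.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      # Synthetic stand-in for a daily Google Analytics export; column names are invented.
      rng = np.random.default_rng(1)
      n_days = 200
      data = pd.DataFrame({
          "sessions": rng.poisson(500, n_days),
          "new_users": rng.poisson(120, n_days),
          "avg_session_duration": rng.normal(180, 30, n_days),
      })
      data["transactions"] = rng.poisson(0.02 * data["sessions"] + 0.01 * data["new_users"])

      y = data["transactions"]
      X = sm.add_constant(data[["sessions", "new_users", "avg_session_duration"]])

      # Poisson GLM with the canonical log link, a natural first choice for daily counts.
      poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()

      # Gamma GLM (log link) as an alternative for a skewed positive response;
      # zero-transaction days are excluded because the Gamma family needs y > 0.
      mask = y > 0
      gamma_fit = sm.GLM(y[mask], X[mask], family=sm.families.Gamma(link=sm.families.links.Log())).fit()

      print(poisson_fit.summary().tables[1])
      print("AIC:", poisson_fit.aic, gamma_fit.aic)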
  • Mukkula, Olli (2024)
    Quantum computers utilize qubits to store and process quantum information. In superconducting quantum computers, qubits are implemented as quantum superconducting resonant circuits. The circuits are operated using only two energy states, which form the computational basis of the qubit. To suppress leakage to non-computational states, superconducting qubits are designed to be anharmonic oscillators, which is achieved using one or more Josephson junctions, a nonlinear superconducting element. One of the main challenges in developing quantum computers is minimizing the decoherence caused by environmental noise. Decoherence is characterized by two coherence times: T1 for depolarization processes and T2 for dephasing. This thesis reviews and investigates the decoherence properties of superconducting qubits. The main goal of the thesis is to analyze the tradeoff between anharmonicity and dephasing in the unimon qubit. The recently developed unimon incorporates a single Josephson junction shunted by a linear inductor and a capacitor. The unimon is tunable by an external magnetic flux, and at the half-flux-quantum bias the Josephson energy is partially canceled by the inductive energy, allowing the unimon to have relatively high anharmonicity while remaining fully protected against low-frequency charge noise. In addition, at the sweet spot with respect to the magnetic flux, the unimon becomes immune to first-order perturbations in the flux. The sweet spot, however, is relatively narrow, making the unimon susceptible to dephasing through the quadratic coupling to flux noise. In the first chapter of this thesis, we present a comprehensive look into the basic theory of superconducting qubits, starting with two-state quantum systems, followed by superconductivity and superconducting circuit elements, and finally combining the two by introducing circuit quantum electrodynamics (cQED), a framework for building superconducting qubits. We follow with a theoretical discussion of decoherence in two-state quantum systems, described by the Bloch-Redfield formalism. We continue the discussion by estimating decoherence using perturbation theory, with special care put into the dephasing due to low-frequency 1/f noise. Finally, we review the theoretical model of the unimon, which is used in the numerical analysis. As the main result of this thesis, we suggest a design parameter regime for the unimon which gives the best ratio between anharmonicity and T2.
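    For reference, the standard textbook relations behind this tradeoff (quoted here up to frequency-dependent prefactors for 1/f noise, not the thesis's exact expressions) are

      \alpha = \omega_{12} - \omega_{01}, \qquad
      \frac{1}{T_2} = \frac{1}{2 T_1} + \Gamma_{\varphi}, \qquad
      \Gamma_{\varphi}^{(1)} \sim A_{\Phi}\,\Bigl|\frac{\partial \omega_{01}}{\partial \Phi}\Bigr|, \qquad
      \Gamma_{\varphi}^{(2)} \sim A_{\Phi}^{2}\,\Bigl|\frac{\partial^{2} \omega_{01}}{\partial \Phi^{2}}\Bigr|,

    where \alpha is the anharmonicity, \omega_{01} and \omega_{12} are the two lowest transition frequencies, A_\Phi is the 1/f flux-noise amplitude, and the second-order dephasing term dominates at the sweet spot where the first derivative vanishes.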
  • Heino, Mikael (2021)
    Flash memory has become the industry standard for data storage in embedded solutions. Flash allows fast read/write speeds that are orders of magnitude above those of spinning hard drives. Flash also takes up less physical space, has a lower risk of mechanical failure and is not susceptible to impact shock. Utilizing flash correctly requires acknowledging its limitations and minimizing their effects. The most critical limitation is memory wear after repeated writes. Our work contributes to limiting memory wear on embedded devices during the over-the-air software update process. In this work we use a Keenetic Ultra II router as the sample embedded environment. The router is a typical embedded system, as it runs a modified Linux operating system on special hardware with fewer available resources compared to a desktop system. The root file system of the firmware also provides us with a realistic sample set of content inside an embedded device. We implement a framework for processing two root file systems and generating a delta-based update between them. We use differential encoding to create binary delta files for the changes between the file systems. There are several differential encoding algorithms available; in this work we use and evaluate Bsdiff and Xdelta. Besides using fewer flash program/erase cycles when updating, the smaller updates also reduce the bandwidth required to ship the updates onto the device.
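    As an illustration of the delta-generation step, the sketch below drives the standard bsdiff/bspatch command-line tools from Python. It is not the thesis's framework; the tools must be installed, and the image paths are invented placeholders.

      import subprocess

      def make_delta(old_image, new_image, patch_path):
          """Generate a binary delta with the bsdiff command-line tool (old + patch -> new)."""
          subprocess.run(["bsdiff", old_image, new_image, patch_path], check=True)

      def apply_delta(old_image, patched_output, patch_path):
          """Reconstruct the new image from the old one and the patch with bspatch."""
          subprocess.run(["bspatch", old_image, patched_output, patch_path], check=True)

      # Illustrative paths, not from the thesis.
      make_delta("rootfs_v1.img", "rootfs_v2.img", "rootfs_v1_to_v2.patch")
      apply_delta("rootfs_v1.img", "rootfs_v2_reconstructed.img", "rootfs_v1_to_v2.patch")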
  • Kåll, Simon (2014)
    Over the last ten years MapReduce has emerged as one of the staples of distributed computing in both small- and large-scale applications. MapReduce has successfully been employed to perform batch parallel computing applications such as web indexing and data mining. Especially Hadoop, an open source implementation of the MapReduce model, has become widely adopted and researched. In MapReduce the input data typically consist of a long list of key/value pairs which has been split up into smaller parts and stored in the cluster performing the computation. The computation consists of two distinct steps, map and reduce. In the map step, nodes are assigned input splits, which they process by applying the user-supplied map function to each element of the designated part of the list. The result of the map step is a new intermediate list of key/value pairs which constitutes the input for the reduce step. In the reduce step, a user-supplied reduce function is applied to the intermediate data. The reduce function performs a summary operation on the elements in the intermediate data list, the result of which is the output of the MapReduce job. The performance of a MapReduce implementation is closely tied to its scheduler algorithm. The scheduler decides when and on which node the map and reduce tasks of the computation are executed in the cluster. The implementation of the scheduler in Hadoop and other systems relies on the underlying cluster being relatively homogeneous, with tasks progressing in a linear fashion. Experience has however shown that this is rarely the case. Differing hardware generations, faults in both hardware and software, as well as varying workloads all contribute to making the environment MapReduce runs in far from homogeneous. In this thesis the performance of nodes executing reduce tasks is shown to strongly correlate with the run-time of the MapReduce job. This correlation is utilized to improve performance in a heterogeneous environment through a reduce delay scheduling algorithm. The algorithm schedules reduce tasks based on historic node performance in order to minimize the likelihood of reduce tasks being executed on poorly performing nodes. In the best case the algorithm improves performance under heterogeneity, and even in the worst case it minimizes the effect of heterogeneity. This thesis demonstrates that, with heterogeneity modeled as a normal distribution of node performance, reduce delay scheduling decreases MapReduce job run-times by up to 30% compared to a homogeneous model of node performance.
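    A toy sketch of the idea behind reduce delay scheduling: keep a running performance score per node and postpone launching reduce tasks on nodes whose score is poor, up to a delay limit. This is a simplified illustration under invented parameters, not the thesis's Hadoop implementation.

      import random
      from collections import defaultdict

      class ReduceDelayScheduler:
          """Toy scheduler: delay reduce tasks on nodes with poor historical performance."""

          def __init__(self, threshold=0.8, max_delays=3):
              self.history = defaultdict(lambda: 1.0)   # node -> running performance score
              self.delays = defaultdict(int)            # node -> times a launch was already delayed
              self.threshold = threshold
              self.max_delays = max_delays

          def record(self, node, score):
              # Exponential moving average of observed node performance.
              self.history[node] = 0.7 * self.history[node] + 0.3 * score

          def assign(self, node):
              """Return True if a reduce task should be launched on `node` now."""
              if self.history[node] >= self.threshold or self.delays[node] >= self.max_delays:
                  self.delays[node] = 0
                  return True
              self.delays[node] += 1                    # skip this node for now, hope for a better one
              return False

      # Node performance modelled as a normal distribution, as in the thesis's evaluation setup.
      sched = ReduceDelayScheduler()
      nodes = ["node-1", "node-2", "node-3"]
      for node in nodes:
          sched.record(node, random.gauss(1.0, 0.2))
      print([n for n in nodes if sched.assign(n)])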
  • Andersson, Markus (2023)
    Using password hashes for verification is a common way to secure users' passwords against a potential data breach. The functions that are used to create these hashes have evolved and changed over time. Hackers and security researchers constantly try to find effective ways to derive the original passwords from these hashes. This thesis focuses on cryptographic hash functions that take passwords as inputs and on the different methods an attacker may use to deduce a password from a hash. The research questions for the thesis are: 1. What kind of password hashing techniques have evolved from the viewpoints of a defender and an attacker? 2. What kind of observations can be made when studying the implementations of the hashing algorithms and the tools that attackers use against the hashes? The thesis examines some commonly used hash functions for passwords and common attack strategies used against them. Hash functions developed especially for passwords, such as PBKDF2 and scrypt, are explained. The password recovery tool Hashcat is introduced and different ways to use the tool against password hashes are demonstrated. Tests are done to demonstrate the differences between hash functions, as well as the effect that offensive and defensive techniques have on password hashes. These test results are explained and reviewed.
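    A minimal example of deriving password hashes with the two functions named above, using Python's standard library; the parameter values are illustrative work factors, not recommendations from the thesis.

      import hashlib
      import os

      password = b"correct horse battery staple"
      salt = os.urandom(16)

      # PBKDF2-HMAC-SHA256: the iteration count is the main work-factor knob a defender can raise.
      pbkdf2_hash = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

      # scrypt: memory-hard by design, which mainly hurts GPU-based cracking.
      scrypt_hash = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, maxmem=64 * 1024 * 1024)

      print(pbkdf2_hash.hex())
      print(scrypt_hash.hex())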
  • Moen, Emma (2024)
    Seismic modelling was conducted to investigate the extent to which seismic methods, and vertical seismic profiling (VSP) in particular, can be used to image steeply dipping faults and fractures in a crystalline bedrock environment typical of southern Finland. The modelling is based on the geology and subsurface geometry found in Kopparnäs, Inkoo, where a steeply dipping fault zone is intersected by a borehole. The goal of the modelling was to design the optimal survey for seismic data acquisition to image the fault zone. Borehole geophysical data analysis and the computation of 1D synthetic seismograms gave a first insight into the expected response of the subsurface structures. Simple travel-time modelling was used to define the time window for direct and reflected waves as well as to gain some understanding of useful source positions, based on the separation of direct and reflected waves. To assess the compatibility of distributed acoustic sensing (DAS) in this setting, a modelling software package for comparing the response of geophones with that of a fiber-optic cable / DAS was used. For more accurate modelling of the propagating wavefield, a finite-difference based full waveform modelling scheme was used to create shot gathers for both acoustic and elastic wave propagation through a 2D model. The raw shot gathers were first briefly analysed before further processing. A common VSP processing sequence was applied to produce migrated and stacked shot gathers, from which the optimal source positions were determined. High frequencies are needed for imaging the subsurface structures in Kopparnäs, largely due to the high velocities of the crystalline bedrock and the shallow geometry of the fault. It was found that a high-resolution image of the upper part of the fault can be obtained using only four shot points located on the south side of the borehole collar, away from the fault. Shear-wave reflections provided the best image of the fault, even with noise added to the shot record. The feasibility of using DAS for data acquisition was evaluated, and because its imaging ability is comparable to that of geophones, this method can be suggested for data acquisition in Kopparnäs. Further modelling can be conducted if desired, but good imaging results should be obtained if the suggested survey geometry is used. The practical and financial benefits of using DAS technology for data acquisition could enable some testing in the field, reducing the need for additional modelling.
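    To illustrate the kind of calculation behind simple travel-time modelling, the sketch below computes straight-ray direct and reflected travel times to a borehole receiver in a constant-velocity medium with a horizontal reflector (the thesis's fault is steeply dipping, so this is only a simplified stand-in). All values are illustrative.

      import math

      def direct_time(offset, receiver_depth, velocity):
          """Straight-ray travel time from a surface source to a borehole receiver."""
          return math.hypot(offset, receiver_depth) / velocity

      def reflected_time(offset, receiver_depth, reflector_depth, velocity):
          """Travel time via a horizontal reflector, using the image-receiver construction."""
          return math.hypot(offset, 2 * reflector_depth - receiver_depth) / velocity

      # Illustrative values: ~5800 m/s P-wave velocity is typical of crystalline bedrock.
      v, offset, z_receiver, z_reflector = 5800.0, 100.0, 50.0, 200.0
      t_dir = direct_time(offset, z_receiver, v)
      t_ref = reflected_time(offset, z_receiver, z_reflector, v)
      print(f"direct {t_dir*1e3:.1f} ms, reflected {t_ref*1e3:.1f} ms, separation {(t_ref - t_dir)*1e3:.1f} ms")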
  • Toivonen, Kim (2022)
    Browser-based 3D applications have become more popular since the introduction of the Web Graphics Library (WebGL). However, they have some unique characteristics, such as the inability to access the local file system and the requirement to be executed in the browser’s scripting environment. These characteristics can introduce performance bottlenecks, and WebGL applications are also vulnerable to the same bottlenecks as traditional 3D applications. In this thesis, we aim to provide guidelines for designing WebGL applications by conducting a background survey and creating a benchmarking platform. Our experiments showed that loading model data from the browser’s execution environment to the GPU has the biggest impact on performance. Therefore, we recommend focusing on minimizing the amount of data that needs to be added to the scene when designing 3D WebGL applications. Additionally, we found that the amount of data rendered affects the severity of performance drops when loading model data to the GPU, and we suggest actively managing the scene by only including relevant data in the rendering pipeline.
  • Kauria-Kojo, Minna (2017)
    The Nordic electricity market, Nord Pool, often serves as an example of an electricity market that is free of regulation and, from a theoretical point of view, resembles many other commodity markets in which prices are determined by supply and demand. The Nordic electricity market is divided into a physical market and a financial market; on the financial market, financial instruments linked to the price of electricity are traded. The quoted instruments are listed on the NASDAQ OMX Commodities marketplace. Of these quoted instruments, this thesis focuses on the pricing of options using analytical methods. An alternative approach would have been, for example, the use of numerical methods for pricing electricity options, but numerical methods are outside the scope of this work. On the NASDAQ OMX Commodities marketplace, the underlying securities of the options are electricity futures. This thesis first covers the modelling of the dynamics of electricity futures and then moves on to the use of the Merton-Black-Scholes model for option pricing. In connection with option pricing, the markets are assumed to be arbitrage-free and complete, under the assumption that futures are available for every point in time. The thesis is divided into five chapters as follows. The first chapter is an introduction presenting the background, motivation, objectives and scope of the work, as well as the structure of the thesis in more detail than this abstract. The second chapter gives a general description of the Nordic electricity market and its structure; of the instruments available on the electricity financial market, forwards, futures, area price products and options are introduced in this chapter. The third chapter builds an understanding of the theoretical framework behind derivative pricing: Brownian motion, Lévy processes, Itô's formula, and the change of measure, in particular the Esscher and Girsanov transforms, are treated in their own sections, and the theoretical market setting is presented. The fourth chapter demonstrates the use of these mathematical tools in the Nordic electricity market; it focuses on the price dynamics and pricing of futures contracts and then moves on to option pricing with the Merton-Black-Scholes model. The final, fifth chapter is reserved for discussion.
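    For reference, the standard Black-Scholes-Merton-type formula for options on futures (Black's 1976 formula, quoted here in its textbook form rather than as the thesis's derivation) prices a European call on a futures contract as

      c = e^{-rT}\bigl[F\,N(d_1) - K\,N(d_2)\bigr], \qquad
      d_1 = \frac{\ln(F/K) + \tfrac{1}{2}\sigma^{2}T}{\sigma\sqrt{T}}, \qquad
      d_2 = d_1 - \sigma\sqrt{T},

    where F is the futures price, K the strike, r the risk-free rate, \sigma the volatility of the futures price, T the time to maturity and N the standard normal distribution function.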
  • Ikonen, Jani (2020)
    In the literature review, basic concepts of proteomics and mass spectrometry were covered. Different data-collection methods (DDA and DIA) were compared with each other, including an exploration of the possibilities of the DIA method. Characteristics of Fourier transform mass spectrometry were discussed in detail, beginning from the production of protein spectra in FTMS instruments and including features of the Orbitrap (hybrid) mass spectrometer. The features covered included measurement modes, working principle, performance characteristics, operation modes and top-down experiments, including large intact protein analysis (m/z range > 6000). The working principles and performance in proteomic analyses of other mass spectrometer instruments were also briefly covered. Orbitrap MS instrumentation was compared with high-performance mass spectrometers including triple quadrupole, time-of-flight, ion trap, and Fourier transform ion cyclotron resonance (FTICR) mass spectrometers. Lastly, the operation and coupling of the LC instrumentation to the Orbitrap mass spectrometer were briefly discussed. The experimental part of the thesis covers the development and feasibility testing of a quality control method for protein analysis, studied with the PierceTM Intact Protein Standard Mix using a microflow liquid chromatography-Orbitrap mass spectrometry combination. The development and testing of the method included optimization of the method for a dried sample, robustness testing with variable LC eluent concentrations, and an assessment of the method performance on a heavily contaminated instrument compared with the performance of a clean MS instrument. The tested heavily contaminated instrument had undergone more than 2000 injections of protein samples without cleaning. In the end, the developed protein analysis method was tested with nine different Q Exactive HF Orbitrap instruments to measure the instrument-to-instrument variation. In the studies, the average mass of the analyzed proteins varied from 9111.47 to 68001.15 Da. The mass range used for identification was 500–2000 Da.
  • Kokko, Sini-Maaria (2013)
    The Cu-Rautuvaara iron-oxide copper gold (IOCG) deposit is located in the western part of the Central Lapland Greenstone Belt. Cu-Rautuvaara and several other IOCG deposits are located next to or within the SSW-NNE-trending Kolari-Pajala Shear Zone. The deposits formed at the contacts of the ca. 1860 Ma Haparanda Suite monzonite-diorite intrusions and the >2050 Ma Savukoski Group supracrustal rocks. At the Cu-Rautuvaara deposit the Cu-Au ore is hosted by magnetite-disseminated albitite. The host rocks of the other deposits in the Kolari area are skarns and ironstones. The wall and host rocks are hydrothermally altered. The alteration can be roughly divided into proximal and distal zones, which form widespread and irregular alteration zones. The distal alteration is characterized by albite ± biotite ± K-feldspar and the proximal alteration by albite + magnetite ± phlogopite ± gedrite ± amphibolite ± sulphides ± clinopyroxene. Based on the mass balance calculations, Zr, TiO2 and Al2O3 were immobile during the alteration. The calculations indicated that the main gains for the albitites were Na2O, Fe2O3, Cu and Ni, and for the diorites and metavolcanites CaO, K2O, Ce and Ba. Determining the albitite protolith from its texture and mineral assemblage is unreliable because of the intense hydrothermal alteration. Immobile-element correlations suggest that the metavolcanites are the most likely protolith for the albitites. The main oxide in the deposits is magnetite, which locally contains accessory ilmenite exsolved from it. The main sulphides are chalcopyrite, pyrite and pyrrhotite. The element contents of the oxides and sulphides were determined by electron microprobe analysis. Based on the analyses, Au occurs as native Au-Ag grains in chalcopyrite. The major trace element in pyrrhotite is Ni (0.1–0.8 wt%). Based on the major trace elements there are two types of pyrite, with Co (<3 wt%) and Ni (<3.5 wt%). The major trace elements in the magnetites are Al2O3 (0.01–0.71 wt%), V2O (<0.23 wt%) and Cr2O3 (<0.27 wt%). Some magnetites also contain MnO (<0.18 wt%) as a trace element. The trace element contents and ore mineral textures suggest that there were two stages of mineralization for both the sulphides and the oxides.
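    Mass balance calculations of this kind are commonly done with a Grant-style isocon approach, in which the altered-rock composition is scaled by the ratio of an immobile element (here Zr, TiO2 or Al2O3) before gains and losses are computed. The sketch below shows the arithmetic with invented compositions; it is illustrative only and not the thesis's data or code.

      def isocon_gains(original, altered, immobile="Zr"):
          """Grant (1986) style mass balance: scale the altered composition by the
          immobile-element ratio and return gains (+) / losses (-) relative to the original rock."""
          scale = original[immobile] / altered[immobile]     # element assumed immobile during alteration
          return {element: scale * altered[element] - original[element]
                  for element in original}

      # Invented compositions (wt% for oxides, ppm for Zr) purely for illustration.
      protolith = {"Zr": 150.0, "Na2O": 3.0, "Fe2O3": 6.0, "CaO": 5.0}
      albitite  = {"Zr": 160.0, "Na2O": 8.5, "Fe2O3": 9.0, "CaO": 1.0}
      print(isocon_gains(protolith, albitite))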
  • Karpoja, Anna (2019)
    Kaapelinkulma is an orogenic gold deposit located in the Paleoproterozoic Vammala Migmatite Belt (1.91–1.79 Ga) in the Valkeakoski municipality in southwestern Finland, and it is considered to have formed during microcontinent collision in the Svecofennian orogeny. Kaapelinkulma comprises a set of sub-parallel lodes in a tight array hosted within a sheared quartz-diorite unit inside a tonalitic intrusion. The gold occurrence is hosted by an en echelon type sheared quartz-dioritic dyke which forms a large xenolith inside a synorogenic tonalite intrusion, surrounded by mica gneiss. It is estimated that the Kaapelinkulma gold deposit contains at least 168 kt of ore at 3.8 g/t Au. The textural setting, mineral associations and assemblages of gold, sulphide and telluride grains in Kaapelinkulma were studied with field-emission scanning electron microscopy (FE-SEM), electron probe microanalysis (EPMA) and scanning electron microscopy. The ore minerals observed in Kaapelinkulma are arsenopyrite, löllingite, pyrrhotite, pyrite and chalcopyrite. Other ore minerals identified are native bismuth, gold, scheelite, bismuth tellurides and maldonite, which were all found in abundant amounts. The ore minerals occur as dissemination in intergranular spaces within the silicate matrix, as polycrystalline aggregates in quartz veins and quartz clusters, and within shear zones. Gold in Kaapelinkulma is present in two generations: as single free native gold grains and as polycrystalline gold aggregates. The polycrystalline gold aggregates are grains formed from several mineral associations and their combinations; the most common are combinations of maldonite-native Au, Au-Bi alloys, Au-Ag grains and Au-hedleyite. The single free native gold grains are pure gold or gold-silver alloys. Free native gold grains are found as single intergranular grains in the silicate matrix, adjacent to or as part of the disseminated ore together with polycrystalline gold aggregates, bismuth and bismuth tellurides. The polycrystalline gold aggregates are found in the disseminated ore in close contact with quartz veins and sulphide aggregates, or as inclusions in arsenopyrite-löllingite contact zones or in other sulphides. The Au concentration of the native gold grains varies from 76.83 to 97.87 wt% according to the EPMA analyses and from 50.03 to 100 wt% according to the FE-SEM analyses. Minor to moderate amounts of silver and copper were identified within the grains. The grain size of the gold varies significantly, from 7 µm² to 5 mm². The ore mineral paragenesis starts with the crystallization of arsenopyrite and löllingite, followed, partly simultaneously, by pyrrhotite, pyrite and chalcopyrite. This was followed by the crystallization of maldonite, the first occurrence of native gold and bismuth, bismuth tellurides, hedleyite, and finally tellurides and the main occurrence of gold. The general ore-forming process at Kaapelinkulma has been open-space filling.
  • Salmi, Rebekka (2023)
    Global warming and anthropogenic activity will change the environmental conditions in the northern regions. For example, precipitation and river flow are expected to increase, more organic matter will end up in the sea from land, and its quality will change. The impact of these changes in organic matter on northern coastal ecosystems and the carbon cycle is poorly known and needs to be studied. In this study, the amount, quality and variation of organic matter accumulated over the past 100 years in the surface sediments of the coastal areas of the Bothnian Bay, in the northern part of the Baltic Sea, and in Liminka Bay are studied by analyzing the concentrations of organic carbon and nitrogen (TOC and TN), the C/N ratio, and the stable isotope ratios δ13C and δ15N, thereby assessing environmental change in the coastal area of the Bothnian Bay. The accumulation of organic matter along the coast of the Bothnian Bay is affected both by the proximity of the rivers and by the land cover and land use of the river basins. More organic matter accumulates on the coasts (on average 3.5 wt%) than farther out in the open sea (on average 1.9 wt%). Contrary to expectations, there is no clear variation in the quality of organic matter between the coast and the open sea; instead, the observed change is north-south: in the northern areas the organic matter is more terrestrial and allochthonous, and in the southern areas it is more aquatic and autochthonous. The northern regions are characterized by large rivers with large amounts of forests and peatlands in their catchment areas. Further south, the rivers are smaller and carry less organic matter. Further north in the coastal ecosystem, primary production is lower and nitrogen does not limit primary production, as opposed to the more southern areas. Primary production in ice may also have affected the organic matter deposited in the Bothnian Bay sediments. The amount of organic matter deposited in Liminka Bay has been rising over the past century, probably due to global warming, increased river flow and the impact of human activity. Based on the C/N ratio, the material was more terrestrial from the 1930s to the 1970s, after which it has become more aquatic. In addition, aquatic primary production has increased in Liminka Bay and nitrogen has begun to limit primary production more. The study shows that climate and environmental change and human activities affect the amount and quality of organic matter in northern coastal areas, but further research is needed to determine the ecosystem impacts more accurately.
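    For reference, the stable isotope values mentioned above use the conventional delta notation (textbook definition, not specific to this thesis):

      \delta^{13}\mathrm{C} = \left(\frac{({}^{13}\mathrm{C}/{}^{12}\mathrm{C})_{\mathrm{sample}}}{({}^{13}\mathrm{C}/{}^{12}\mathrm{C})_{\mathrm{standard}}} - 1\right) \times 1000\ \text{‰},

    with δ15N defined analogously from the 15N/14N ratios.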
  • Laine, Eero-Veikko (2018)
    Internal startups are new ventures launched by companies seeking ways to achieve radical innovation. Conceptually part of the Lean Startup approach, they are strongly influenced by independent startup companies and agile software methodology. Internal startups favor a "lean" approach to their organization, usually have minimal resources and are expected to produce results fast. This thesis explores how to organize testing effectively in the difficult conditions internal startups operate in. We conducted a case study in which we interviewed five IT professionals associated with an internal startup in a global IT service and software company. To systematically analyze the material collected in the interviews, we applied thematic synthesis. Our results suggest that the organization of testing in internal startups is affected by the resources provided by the startup's parent company, as well as by the demands presented by the company. Our results also suggest that the lean approach favored by internal startups can have a negative effect on testing and product quality. These results are supported by the existing literature on the subject. ACM Classification: Software and its engineering → Software testing and debugging; Social and professional topics → Project and people management.
  • Laitinen, Niko (2020)
    The nature of developing digital products and services has changed to adjust to emergent markets that change fast and are difficult to predict. This has prioritized time-to-market and customer validation. The increase in expectations and in the complexity of digital products has caused organizations to transform to better operate in the present market space. As a consequence, the demand for user experience design in digital product development has grown. Design in this context is defined as a plan or specification for the construction of software applications or systems or for the implementation of an activity or process, or the result of that plan or specification in the form of a prototype, product or process. New ways of organizing design work are needed to adjust to the evolving organizations and end-customer markets. In this case study, digital product design was examined as a craft, a process and a set of methods for defining, creating and validating digital products for consumer markets. The significant adoption of Lean-Agile software methodologies has successfully spread to organizations in response to the changed market space, yet the incorporation of these methodologies into digital design has not yet reached maturity. Results from extensive studies have shown that successfully applying Lean-Agile methodologies can improve the quality of user experience and team productivity. These results have increased the integration of user experience practices into Lean-Agile methodologies. Successfully integrating Lean-Agile development and design has been shown to have substantial effects on business growth on a large scale, largely due to increased customer engagement through user-centered design approaches. This thesis investigates how Lean-Agile methodologies, user-centered design, Lean UX, Design Thinking, the Lean Startup Method, Agile Software Development, DesignOps and Design Systems could be incorporated into the design process in digital product development to improve the impact, efficiency and quality of design work outcomes. A case study was conducted to assess the benefits of using Lean-Agile methodologies in the development of customer-facing digital products and services. The design organization was examined with participatory and action research to establish a model of design operations in the community of practice, a group of people who share design as a discipline. This constructed model is then evaluated against a DesignOps model constructed from the applicable literature. The participation allowed the use of operations management methods which aimed to make the design process rapid, robust and scalable. These operations are later referred to as DesignOps (design operations). Quantitative and qualitative research methods were used to gather data. Quantitative methods included a team survey and an analysis of business metrics, including the effects on business value and growth. Qualitative methods included discussion, interviews and observation in the design organization. The sustainability of the design practice was also evaluated. In addition, the design organization put effort into building a company-wide Design System, and the benefits of and considerations for building one are examined. The results indicated that the design practice at the studied company was still low in maturity. DesignOps methods in particular were noticeably beneficial: efficiency in digital product development and an increase in employee satisfaction could be identified from establishing design practices with DesignOps. Beyond these findings, the establishment of a design organization with DesignOps enabled continuous improvement inside the company. Lean UX showed promising signs for establishing highly functioning digital product development teams that can efficiently produce design work in a sustainable manner. Design Systems spread knowledge and cultivate collaboration across disciplines, which grows the impact of design inside the company. Establishing a design practice for continuous learning can increase market value, generate growth and improve employee and customer satisfaction.
  • Salo, Vili-Taneli (2018)
    This thesis deals with transfer hydrogenation, i.e. the addition of hydrogen to a target molecule using a hydrogen source other than molecular hydrogen. The main motivation for transfer hydrogenation research is to replace molecular hydrogen because of the safety risks it poses. The literature review of the thesis focuses on the mechanisms of transfer hydrogenation reactions and on the structures and catalytic activities of different transfer hydrogenation catalysts. The thesis presents how the different properties of the substrates and catalysts affect how transfer hydrogenation reactions work. The influence of different ligands and metals on the electronic properties of the catalytic centres and on the mechanism of the catalyst emerges as particularly important. A mechanistic understanding of transfer hydrogenation chemistry enables the development of better transfer hydrogenation catalysts as well as of other catalysts, since similar transition states, complex structures and ligands are also encountered in other chemical processes. In the experimental part of the thesis, the possibility of using Wilkinson's catalyst together with organic superbases as a transfer hydrogenation catalyst for the reduction of ketones to secondary alcohols was investigated. The transfer hydrogenation reactions were monitored with an in-situ IR spectrometer, which made it possible to optimize the reaction times for each substrate. The method is particularly suitable for monitoring reactions in which a specific functionality increases or decreases as the reaction proceeds, provided that the functionality has a strong specific signal in the IR region. The study showed that this catalyst system can perform transfer hydrogenation of ketones and that, with suitable substrates, the catalyst is highly active in transfer hydrogenation.