-
(2020)Due to its exceptional thermal properties and irradiation resistance, tungsten is the material of choice for critical plasma-facing components in many leading thermonuclear fusion projects. Owing to the natural retention of hydrogen isotopes in materials such as tungsten, the safety of a fusion device depends heavily on the inventory of radioactive tritium in its plasma-facing components. The proposed methods of tritium removal typically include thermal treatment of massive metal structures for prolonged timescales. A novel way to either shorten the treatment times or lower the required temperatures is based on performing the removal under an H₂ atmosphere, effectively exchanging the trapped tritium for non-radioactive protium. In this thesis, we employ molecular dynamics simulations to study the mechanism of hydrogen isotope exchange in vacancy, dislocation and grain boundary type defects in tungsten. By comparing the results to simulations of purely diffusion-based tritium removal methods, we establish that hydrogen isotope exchange indeed facilitates faster removal of tritium for all studied defect types at temperatures of 500 K and above. The fastest removal, when normalising based on the initial occupation of the defect, is shown to occur in vacancies and the slowest in grain boundaries. Through an atom-level study of the mechanism, we are able to verify that tritium removal using isotope exchange depends on keeping the defect saturated with hydrogen. This study also shows that molecular dynamics is indeed a valid tool for studying tritium removal and isotope exchange in general. Using small system sizes and spatially parallelised simulation tools, we have managed to model isotope exchange for timescales extending from hundreds of nanoseconds up to several microseconds.
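The strong temperature dependence of trap-limited tritium release can be illustrated with a simple Arrhenius estimate of the mean residence time of a hydrogen isotope in a trap. This is only a back-of-the-envelope sketch, not the molecular dynamics setup used in the thesis, and the attempt frequency and detrapping energy below are generic assumed values rather than results from the work.

```python
import numpy as np

K_B = 8.617e-5   # Boltzmann constant, eV/K
NU = 1e13        # assumed attempt frequency, 1/s (typical order of magnitude)
E_DETRAP = 1.2   # assumed detrapping energy, eV (illustrative value only)

def residence_time(T):
    """Arrhenius estimate of the mean time a hydrogen isotope stays trapped at temperature T (K)."""
    return np.exp(E_DETRAP / (K_B * T)) / NU

for T in (300.0, 500.0, 800.0):
    print(f"T = {T:.0f} K: ~{residence_time(T):.2e} s per detrapping event")
```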
-
(2020)Prediction of the pathological T-stage (pT) in men undergoing radical prostatectomy (RP) is crucial for disease management, as curative treatment is most likely when prostate cancer (PCa) is organ-confined (OC). Although multiparametric magnetic resonance imaging (MRI) has been shown to predict pT findings and the risk of biochemical recurrence (BCR), none of the currently used nomograms allow the inclusion of MRI variables. This study aims to assess the possible added benefit of MRI when compared to the Memorial Sloan Kettering, Partin table and CAPRA nomograms and a model built from available preoperative clinical variables. Logistic regression is used to assess the added benefit of MRI in the prediction of non-OC disease, and Kaplan-Meier survival curves and Cox proportional hazards models are used in the prediction of BCR. For the prediction of non-OC disease, all models with the MRI variables had significantly higher discrimination and net benefit than the models without the MRI variables. For the prediction of BCR, MRI prediction of non-OC disease separated the high-risk group of all nomograms into two groups with significantly different survival curves, but in the Cox proportional hazards models the variable was not significantly associated with BCR. Based on the results, it can be concluded that MRI does offer added value in predicting non-OC disease and BCR, although the results for BCR are not as clear as for non-OC disease.
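The core comparison, a model with and without an MRI variable scored by discrimination, can be sketched as below with scikit-learn. The synthetic covariates, the binary MRI predictor and the outcome are invented placeholders, not the study's cohort or its nomogram variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
# Hypothetical preoperative variables plus a binary MRI prediction of non-organ-confined disease.
X_clinical = rng.normal(size=(n, 3))
mri = rng.integers(0, 2, size=n)
y = (0.8 * X_clinical[:, 0] + 1.2 * mri + rng.normal(size=n) > 1.0).astype(int)  # synthetic outcome

for name, X in [("clinical only", X_clinical), ("clinical + MRI", np.column_stack([X_clinical, mri]))]:
    auc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean cross-validated AUC = {auc:.3f}")
```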
-
Airbnb:n mahdolliset vaikutukset Helsingin vuokra-asuntomarkkinoihin : vuokra-aukkoteorian näkökulma (2020)Short-term rental platforms have become widely popular in recent years, but their growth has left cities to face a variety of problems. Studies have shown, for example, that short-term rentals have led to an increase in long-term rental prices. When more and more apartments are used as short-term rentals, the supply of long-term rentals decreases. This puts more pressure on the long-term rental market and leads to increased prices. In this study I examine the possible impacts of Airbnb, the biggest and most popular short-term rental platform, on the rental market of Helsinki. First, I analyse how many apartments have been removed from the long-term rental market to the Airbnb market, and second, how likely it is that the number of Airbnb rentals will rise in the future. Presumably, renting through Airbnb becomes more popular when the potential income from Airbnb rentals is larger than from long-term rentals. In Neil Smith's terms, this difference between actual and potential rental income constitutes a rent gap. Therefore, I also analyse whether renting short-term in Helsinki is more profitable than renting long-term. In addition, I discuss the current city and tourism policies of the city of Helsinki in the light of the results of the above research questions and give recommendations on issues to be taken into account in the future. This study uses AirDNA's data on Airbnb rentals in Helsinki and long-term rental price data from KTI Property Information Ltd, and it focuses on data from the year 2019. Data analysis is conducted using statistical and geospatial methods. The results of this study show that in 2019 there was a significant number of professional Airbnb rentals in Helsinki, 863 in total. However, their number varied substantially between the districts of Helsinki. There was a large number of professional Airbnb rentals especially in the city centre and the Kallio area. On the scale of the whole of Helsinki, professional Airbnb rentals comprised approximately 0.5 % of all rental apartments, whereas in some districts in the city centre the percentage was considerably higher, in the Kamppi district as much as 3.9 %. Based on the results, the number of Airbnb rentals will likely grow in the future because Airbnb rental income was, on average, greater than long-term rental income in each of the study areas. In most areas, the rent gap was substantial. However, the size of the rent gap varied significantly, between 50 and 1350 euros, based on the location and the number of rooms of the apartment. Some policy recommendations can be made based on the results of this study. When designing future policies, it is important to acknowledge that a sizeable part of the Airbnb rentals in Helsinki is professional and that the number of professional rentals will probably continue to increase. Since the number of professional rentals is still quite small on the city level, the impacts of short-term rentals in Helsinki are presumably not yet significant. Nevertheless, in the future problems can arise especially in the city centre and the Kallio area, as these areas have a lot of professional Airbnb rentals. Since only professional Airbnb rentals are disadvantageous for the long-term rental market, assigning certain restrictions would be justifiable in order to prevent future problems and to promote sustainable tourism. Restricting Airbnb activity could be done by enforcing current regulations more rigorously or by setting a yearly renting limit, as many other European cities have done. This would help to inhibit activity that is against the current legislation and to support the real sharing economy.
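Since the rent gap is simply the difference between potential short-term income and realised long-term rent, the district-level comparison can be illustrated with a small pandas sketch. The figures and district names below are made up for illustration; they are not the AirDNA or KTI data used in the study.

```python
import pandas as pd

# Hypothetical monthly income figures per district (EUR); not the study's actual data.
df = pd.DataFrame({
    "district": ["Kamppi", "Kallio", "Vuosaari"],
    "airbnb_monthly_income": [2100, 1650, 980],
    "long_term_monthly_rent": [1400, 1150, 900],
})
df["rent_gap"] = df["airbnb_monthly_income"] - df["long_term_monthly_rent"]
print(df.sort_values("rent_gap", ascending=False))
```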
-
(2019)Estimating the error level of models is an important task in machine learning. If the data used is independent and identically distributed, as is usually assumed, there exist standard methods to estimate the error level. However, if the data distribution changes, i.e., a phenomenon known as concept drift occurs, those methods may not work properly anymore. Most existing methods for detecting concept drift focus on the case in which the ground truth values are immediately known. In practice, that is often not the case. Even when the ground truth is unknown, a certain type of concept drift called virtual concept drift can be detected. In this thesis we present a method called drifter for estimating the error level of arbitrary regression functions when the ground truth is not known. Concept drift detection is a straightforward application of error level estimation. Error level based concept drift detection can be more useful than traditional approaches based on direct distribution comparison, since only changes that affect the error level are detected. In this work we describe the drifter algorithm in detail, including its theoretical basis, and present an experimental evaluation of its performance in virtual concept drift detection on multiple datasets, both synthetic and real-world, and multiple regression functions. Our experiments show that the drifter algorithm can be used to detect virtual concept drift with reasonable accuracy.
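The abstract does not spell out the drifter algorithm itself, so the sketch below illustrates only the general idea of detecting virtual concept drift, i.e. a change in the input distribution, without access to ground-truth labels. It uses a simple nearest-neighbour distance criterion and is not the drifter method; the data and threshold choice are assumptions for illustration.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
X_train = rng.normal(0.0, 1.0, size=(1000, 3))    # data the regression model was fit on
X_new = rng.normal(1.5, 1.0, size=(200, 3))       # new data whose distribution has shifted

# Calibrate a distance threshold on held-out training data.
nn = NearestNeighbors(n_neighbors=5).fit(X_train[:800])
cal_dist = nn.kneighbors(X_train[800:])[0].mean(axis=1)
threshold = np.quantile(cal_dist, 0.99)

# Flag new points that lie far from anything seen during training.
new_dist = nn.kneighbors(X_new)[0].mean(axis=1)
print(f"fraction of new points flagged as drifted: {(new_dist > threshold).mean():.2f}")
```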
-
(2019)This thesis presents a wavelet-based method for detecting moments of fast change in the textual contents of historical newspapers. The method works by generating time series of the relative frequencies of different words in the newspaper contents over time and calculating their wavelet transforms. The wavelet transform is essentially a group of transformations describing the changes happening in the original time series at different time scales, and can therefore be used to pinpoint moments of fast change in the data. The produced wavelet transforms are then used to detect fast changes in word frequencies by examining products of multiple scales of the transform. The properties of the wavelet transform and the related multi-scale product are evaluated in relation to detecting various kinds of steps and spikes in different noise environments. The suitability of the method for analysing historical newspaper archives is examined using an example corpus consisting of 487 issues of Uusi Suometar from 1869–1918 and 250 issues of Wiipuri from 1893–1918. Two problematic features in the newspaper data, noise caused by OCR (optical character recognition) errors and the uneven temporal distribution of the data, are identified and their effects on the results of the presented method are evaluated using synthetic data. Finally, the method is tested using the example corpus, and the results are examined briefly. The method is found to be adversely affected especially by the uneven temporal distribution of the newspaper data. Without additional processing, or improving the quality of the examined data, a significant share of the detected steps is due to the noise in the data. Various ways of alleviating the effect are proposed, along with other suggested improvements to the system.
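As a rough illustration of the multi-scale product idea, the sketch below computes a stationary wavelet transform of a synthetic word-frequency series and multiplies the detail coefficients across scales, so that a step in the series shows up as a peak. The wavelet, decomposition level and synthetic data are assumptions for illustration, not the thesis implementation.

```python
import numpy as np
import pywt

def multiscale_product(series, wavelet="haar", level=3):
    """Product of detail coefficients across scales; peaks mark abrupt changes."""
    n = len(series)
    pad = (-n) % (2 ** level)                        # swt needs length divisible by 2**level
    x = np.pad(series, (0, pad), mode="edge")
    coeffs = pywt.swt(x, wavelet, level=level)       # list of (approx, detail) pairs per level
    details = np.array([cD for _, cD in coeffs])
    return np.prod(details, axis=0)[:n]

rng = np.random.default_rng(0)
freq = np.concatenate([np.full(200, 0.001), np.full(200, 0.004)])   # step in relative word frequency
freq = freq + rng.normal(0, 3e-4, freq.size)                        # OCR-like noise
score = np.abs(multiscale_product(freq))
print(f"strongest change detected near index {int(np.argmax(score))}")
```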
-
(2020)Further proof of the unique morphologies of water-soluble poly(2-isopropyl-2-oxazoline)-block-poly(DL-lactide) and poly(2-isopropyl-2-oxazoline)-block-poly(L-lactide) (PiPOx-b-PDLLA and PiPOx-b-PLLA) nanoparticles was obtained via fluorescence spectroscopy. Additionally, loading and release studies were carried out with hydrophobic curcumin molecules to outline the potential of the amphiphilic block copolymers in drug delivery applications. To study the morphology of the nanoparticles, absorption and emission spectra of pyrene were measured in water dispersions of the nanoparticles at several concentrations. The obtained I1/I3, I337/I333.5 and partitioning constant (Kv) values were compared to corresponding data from a control core/shell nanoparticle, poly(ethylene glycol)-block-poly(DL-lactide) (PEG-b-PDLLA). Of the three different amphiphilic polymers, PEG-b-PDLLA showed the smallest and PiPOx-b-PDLLA the highest Kv value. This indicates that the PiPOx-b-PDLLA core is less hydrophobic and looser compared to the dense cores of PEG-b-PDLLA and PiPOx-b-PLLA, making it capable of encapsulating the greatest amount of pyrene. In the loading and release studies, the nanoparticles were loaded with curcumin and placed in dialysis against PBS Tween® 80 solution. The curcumin content of the samples was monitored over a week by measuring the emission spectra of curcumin. PiPOx-b-PDLLA showed greater potential as a drug delivery agent: it formed more stable nanoparticles and showed higher loading capacities, higher encapsulation efficiencies and slower release rates. The flash nanoprecipitation (FNP) method was also used to prepare the same nanoparticles with and without encapsulated curcumin. In addition to the encapsulation efficiencies, the sizes of the nanoparticles were determined via dynamic light scattering (DLS). PiPOx-b-PLLA formed the smallest nanoparticles with the lowest encapsulation efficiencies, agreeing well with the higher density of the PLLA core. All three investigated amphiphilic copolymers formed stable nanoparticles in water at room temperature. In contrast, the stability of the nanoparticles was found to be poor in saline solutions at body temperature. Mixing PEG-b-PDLLA with PiPOx-b-PLA in a ratio of 20:80 w-% increased the stability of the nanoparticles under physiological conditions, simultaneously uncovering the thermoresponsive character of the PiPOx blocks. Turbidity measurements of PEG-b-PDLLA mixed with PiPOx-b-PDLLA in a ratio of 20:80 w-% showed a slight decrease in transmittance at 30 °C, which corresponds to the cloud point of PiPOx-b-PDLLA in PBS solution. However, it remains unclear whether the increased stability is due to PEG-b-PDLLA mixing in the same micelles with PiPOx-b-PDLLA, thus hindering the aggregation of the nanoparticles above the cloud point of the PiPOx blocks.
-
(2020)Social networks represent a public forum of discussion for various topics, some of them controversial. Twitter is such a social network; it acts as a public space where discourse occurs. In recent years the role of social networks in information spreading has increased, as have the fears regarding the increasingly polarised discourse on social networks, caused by the tendency of users to avoid exposure to opposing opinions while increasingly interacting only with like-minded individuals. This work looks at controversial topics on Twitter, over a long period of time, through the prism of political polarisation. We use the daily interactions, and the underlying structure of the whole conversation, to create daily graphs that are then used to obtain daily graph embeddings. We estimate the political ideologies of the users that are represented in the graph embeddings. By using the political ideologies of users and the daily graph embeddings, we offer a series of methods that allow us to detect and analyse changes in the political polarisation of the conversation. This enables us to conclude that, during our analysed time period, the overall polarisation levels for our examined controversial topics have stagnated. We also explore the effects of topic-related controversial events on the conversation, thus revealing their short-term effect on the conversation as a whole. Additionally, the linkage between increased interest in a topic and the increase of political polarisation is explored. Our findings reveal that as the interest in the controversial topic increases, so does the political polarisation.
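The thesis works with daily graph embeddings; the sketch below shows a much simpler proxy for the same intuition: build a daily interaction graph with networkx and measure how strongly users interact within their own ideological group using attribute assortativity. The edge list and ideology labels are invented for illustration and this is not the thesis' embedding-based method.

```python
import networkx as nx

# Hypothetical daily retweet/reply edges and user ideology labels (invented data).
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("d", "e"), ("e", "f"), ("d", "f"), ("c", "d")]
ideology = {"a": "left", "b": "left", "c": "left", "d": "right", "e": "right", "f": "right"}

G = nx.Graph()
G.add_edges_from(edges)
nx.set_node_attributes(G, ideology, "ideology")

# Values near +1 mean users interact almost only within their own group (high polarisation).
score = nx.attribute_assortativity_coefficient(G, "ideology")
print(f"daily polarisation proxy (assortativity): {score:.2f}")
```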
-
(2019)This study investigates temperature data that Posiva Oy has from the Olkiluoto and ONKALO® sites. The aim of the study was to create a unifying data classification for the existing temperature measurements, give an estimate of the initial undisturbed bedrock temperature and temperature gradient, and model the temperature profiles in 3D. The thermal issues that the repository will undergo once in operation are significant and contribute fundamentally to the evolution of the repository, creating the need for such a study. Posiva Oy has temperature data obtained with four main methods: geophysical drillhole loggings, Posiva flow log (PFL) measurements, thermal properties (TERO) measurements and Antares measurements. The data classification was carried out by creating a platform of quality aspects affecting the measurements. The classification was then applied to all the available data by inspecting the measurement specifics of each configuration and by observing the temperature/depth profiles with WellCad software. According to the specifics of each individual measurement, the data was classified into three groups: A = the best data, recommended for further use, which fulfils all quality criteria; B = data that should be used with reservation and which only partly fulfils the quality criteria; and C = unusable data. Only data that showed no major disturbance within the temperature/depth profile (class A or B) were used in this study. All the temperature/depth data was corrected to the true vertical depth. The initial undisturbed average temperature of the Olkiluoto bedrock at the deposition depth of 412 m and the temperature gradient, according to the geophysical measurements, PFL measurements (without pumping), TERO measurements and Antares measurements, were found to be 10.93 ± 0.09 °C and 1.47 °C/100 m, 10.85 ± 0.02 °C and 1.43 °C/100 m, 10.60 ± 0.08 °C and 1.65 °C/100 m, and 10.75 °C and 1.39 °C/100 m, respectively. The 3D layer models presented in this study were generated using Leapfrog Geo software. From the model a 10.5–12 °C temperature range was obtained for the deposition depth of 412–432 m. The models indicated clear temperature anomalies in the volume of the repository. These anomalies showed a relationship with the locations of the major brittle fault zones (BFZ) of Olkiluoto island. Not all observed anomalies could be explained by a possible cause. Uncertainties within the modelling phase should be taken into consideration in further interpretations. By combining an up-to-date geological model and a hydraulic model of the area with the temperature models presented here, a better understanding of the temperature anomalies and a clearer overall understanding of the thermal conditions of the planned disposal location will be achieved. Based on this study, a uniform classification improves the usability of the data and leads to a better understanding of the possibilities and weaknesses within it. The initial bedrock temperature and the temperature gradient indicate that Olkiluoto is thermally a relatively uniform formation. The estimates of the initial bedrock temperatures and the temperature gradient presented in this study support previous estimates. Presenting the classified temperature data in 3D format generated good results in the light of the thermal dimensioning of Olkiluoto by showing distinct relationships with the previously created brittle fault zone (fracture zone) models.
The views and opinions presented here are those of the author, and do not necessarily reflect the views of Posiva.
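The reported temperature gradients (about 1.4–1.7 °C per 100 m) are essentially the slope of a linear fit of temperature against true vertical depth. The sketch below shows that fit on invented depth/temperature pairs, not on Posiva's measurement data.

```python
import numpy as np

# Invented temperature / true-vertical-depth pairs (not Posiva data).
depth_m = np.array([100, 200, 300, 400, 500], dtype=float)
temp_c = np.array([6.5, 7.9, 9.4, 10.9, 12.3])

slope, intercept = np.polyfit(depth_m, temp_c, 1)   # linear fit T = slope*z + intercept
print(f"gradient: {slope * 100:.2f} °C/100 m, surface intercept: {intercept:.2f} °C")
print(f"temperature at 412 m depth: {slope * 412 + intercept:.2f} °C")
```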
-
(2020)The analysis of volatile organics is growing by the year and there is great interest in fast and simple sample preparation techniques. With solid phase micro-extraction, samples can be extracted non-destructively without a need for solvents. This is both cost-effective and ecological, because even the most eco-friendly solvents still cause strain on the environment. This thesis focused on studying the effect of extraction conditions on the extraction efficiency. The effect of different sorptive phase materials was tested as well. A new single-step sample extraction and preparation method was developed for gas chromatographic-mass spectrometric analysis. Three different sorptive phase materials were compared and the extraction conditions were optimized for each. The method developed was used to extract, analyze and determine unknown compounds from a butterfly specimen. Multiple extractions were performed both from the headspace and by direct immersion. By progressively changing the extraction conditions, properties of the compounds such as volatility and polarity could be inferred from their presence alone. Analysis was performed with a gas chromatograph-mass spectrometer using an electron ionization quadrupole mass detector in full-scan mode.
-
(2020)In a world of constantly growing data masses, the efficient extraction, storage and accessing of data for business intelligence and analytics has become increasingly important to businesses. Analytics and business intelligence software is offered by many providers in the market for organizations of all sizes, and there are multiple ways to build an analytics system, or pipeline, from scratch or integrated with tools available on the market. In this case study we explore and re-design the analytics pipeline solution of a medium-sized software product company by utilizing the design science research methodology. We discuss the current technologies and tools on the market for business intelligence and analytics and consider how they fit into our case study context. As design science suggests, we design, implement and evaluate two prototypes of an analytics pipeline with an Extract, Transform and Load (ETL) solution and a data warehouse. The prototypes represent two different approaches to building an analytics pipeline: an in-house approach and a partially outsourced approach. Our study brings out typical challenges similar businesses may face when designing and building their own business intelligence and analytics software. In our case we lean towards an analytics pipeline with an outsourced ETL process to be able to pass various types of event data with a consistent data schema into our data warehouse with minimal maintenance work. However, we also show the value of near real-time analytics with an in-house solution, and offer some ideas on how such a pipeline may be built.
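As a minimal sketch of the in-house ETL idea, assuming JSON event records and using SQLite as a stand-in for the data warehouse, one round of extract-transform-load might look like this. The event fields and table name are invented for illustration and do not describe the case company's schema.

```python
import json
import sqlite3

# Extract: hypothetical raw product events, e.g. read from a log file or an API.
raw_events = [
    '{"user": "u1", "event": "login", "ts": "2020-05-01T10:00:00"}',
    '{"user": "u2", "event": "export", "ts": "2020-05-01T10:05:00"}',
]

# Transform: parse and normalise the events into a consistent schema.
rows = [(e["user"], e["event"], e["ts"]) for e in map(json.loads, raw_events)]

# Load: write the rows into a warehouse table (SQLite here as a stand-in).
con = sqlite3.connect("warehouse.db")
con.execute("CREATE TABLE IF NOT EXISTS events (user TEXT, event TEXT, ts TEXT)")
con.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)
con.commit()
con.close()
```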
-
(2019)This thesis examines a cold spell in Northern Europe, and particularly in Lapland, from 1 January to 6 January 2017. During the study period, a temperature below minus forty degrees Celsius was measured in Sodankylä, while the IFS weather model of the European Centre for Medium-Range Weather Forecasts (ECMWF) predicted the surface temperature more than ten degrees too high. The cold spell extended all the way to Bulgaria and Greece, suggesting a larger-scale weather disturbance. From these starting points I set out to investigate whether the cause of the temperature overprediction was inaccurate forecasting of a larger synoptic-scale disturbance. In this work I visualise the IFS pressure and temperature fields on the ECMWF Metview platform and compare them to a synoptic analysis as well as to surface and sounding observations from Sodankylä. As a basis I use ECMWF's own report on the exceptional weather event, which, however, focuses more on the exceptional cold and heavy snowfall in South-Eastern Europe. The work finds that the IFS predicted the formation and movement of the synoptic-scale low-pressure systems and other weather disturbances during the study period quite well. In my assessment, the reason for the overprediction of the surface temperature is not an incorrectly forecast flow situation but the way the model treats the surface temperature. In very stable conditions, the assumptions on which the model's surface temperature is computed do not produce a sensible result. The soundings show that a strong surface inversion prevailed in Sodankylä, which the model cannot fully reproduce because of the way it treats the layer between the surface and the lowest model level. However, the predicted temperature deviates from the observed one so strongly that the problems in modelling the inversion are not necessarily the only source of error. Finally, I briefly review reports of the model's problems in predicting surface temperature in Finnish winter conditions, and how ECMWF itself has addressed the problem. As a global model, the IFS is calibrated to produce on average the most accurate forecast over the whole planet, and it is important to know the edge cases in which its assumptions are not valid.
-
(2020)Urban spatial planning is a cooperative mechanism in ethics which seeks to regulate how land is used, modified and arranged in order to sustain quasi-stable coexistences of dense populations with varied needs and values. Perhaps no needs and values are more varied than those of the many nonhuman animals which live alongside humans in urban spaces. Communicative planning theory (CPT) has emerged over the last 30 years to improve planning's ethical content by navigating fuller and more diverse multi-interest, multi-stakeholder discourses. The perceived or real absence of significant human-nonhuman animal communication presents a problem for incorporating animals into communicative planning's anthroponormative frameworks. This thesis adopts a socioecologically hybridized perspective to explore why and how animals may be conceived of as stakeholders in communicative planning, what values and practices produce human-nonhuman animal relationships, and how these translate to outcomes in spatial planning. Using theories which question the viability of the human-animal binary, especially actor network theory (ANT) and Callon's sociology of translation, I develop my own relational perspective of urban communicative and spatial planning practice that may include nonhuman animals as part of urban spatial planning's 'decision-making spaces'. I use this approach in the analysis of a spatial planning problem involving three species of nonhuman animals: the Jokeri Light Rail of Helsinki, Finland. From the case study I draw conclusions about how nonhuman animals relate, communicate and negotiate within spatial planning systems in fundamentally distinct ways requiring the development of new communicative apparatus and stakeholder engagement tools. In conclusion, I discuss the ways in which the animal-as-stakeholder concept might be affirmatively used by professional planners to achieve better outcomes for multi-species communities. This means conceiving of urban development not as a battle of human progress against biodiversity conservation, but as a multivariable negotiation to reach 'good enough' outcomes for a multitude of organisms. I conclude that contemporary spatial planning's ethical aim of creating quasi-stable urban coexistences demands developing deliberative processes of decision-making with and in a multispecies community.
-
(2019)An investigation into switchable polarity ionic liquids was carried out to find greener alternative substituents while still obtaining a switchable polarity ionic liquid. First, the fluorinated compounds (a fluorinated alcohol and amine) were replaced with a non-fluorinated hydroxylamine to form a mixed carbamate; then the superbase was replaced with a basic tertiary (or secondary) amine. The trigger molecule for switching polarity was CO2. It was found that O-hexylhydroxylamine was a suitable replacement for the fluorinated ethanol and fluorinated ethylamine, working with DBU (a superbase) to form a switchable polarity ionic liquid. The three amines tested, triethylamine (TEA), diisopropylethylamine (Hünig's base) and diisopropylamine (DIPA), gave inconclusive or unsuccessful results. Both TEA and DIPA require further alternative analysis for a conclusive result, while Hünig's base was proven to be unsuccessful. These reaction products were characterised with 1H and 13C NMR and ReactIR spectral data. The synthesis of the hydroxylamine was also approached with a greener improvement in mind. A new synthesis method is demonstrated that succeeds using water and methylamine in ethanol, working on reaction equilibria. The new method proposed had a yield of 29.1%, while the patent literature method that used hydrazine monohydrate (which is highly toxic and unstable unless in solution) gave a yield of 54.3% of the hydroxylamine. A secondary investigation was also undertaken into the basicity effects of caesium carbonate on CO2 addition to aniline, with and without a superbase present. The superbase used was tetramethylguanidine (TMG). Aniline, p-nitroaniline and p-methoxyaniline were tested for CO2 addition by the formation of an amide peak in ReactIR. There was formation of the amide peak with caesium carbonate, though not as much as with the already known TMG. A concentration series of caesium carbonate and TMG in aniline was also devised to observe the effect the added caesium carbonate had on the aniline-TMG system in absorbing CO2. This was also analysed using ReactIR spectra. It was generally seen that increasing the concentration of TMG and/or Cs2CO3 increases carbamate formation. However, further concentration series data are required before a generalised rule can be defined.
-
(2019)The purpose of this thesis is to guide the reader to the definitions and theory of the Ext functor and group cohomology, and thereby introduce the reader to central concepts of homological algebra. The first chapter presents the background knowledge assumed by the thesis, beyond the contents of basic courses in algebra and algebraic topology. The second chapter introduces the group extension problem and solves it in the case where the given subgroup is abelian. Group extensions are shown to be in one-to-one correspondence with the elements of a certain group, and particular attention is paid to those group extensions that are semidirect products of the given groups. The formulas that arise are observed to correspond to certain formulas appearing in the definition of the singular cochain complex. The third chapter defines the bar resolution and the normalised bar resolution, and, based on them, group cohomology. First, as a technical aside, the notion of a G-module is defined, which allows group actions to be handled like modules. The central result of the chapter is that the bar resolution and the normalised bar resolution are homotopy equivalent; a generalisation of this result guarantees, among other things, that the Ext functor is well defined. At the end of the chapter the cohomology groups of a cyclic group are computed. The fourth chapter defines resolutions in full generality, as well as projective and injective modules and resolutions. The bar resolutions are shown to be projective, and the proof that their homotopy types agree is observed to generalise to projective and injective resolutions. At the same time the definition of group cohomology is extended, as the bar resolution can be replaced by any projective resolution. The chapter also defines the exactness of functors, and in particular the connection between the exactness of the Hom functor and projective and injective modules is examined. The fifth chapter defines the notion of a right derived functor and, as a special case of it, the Ext functor, which is the right derived functor of the Hom functor. Since the Hom functor is a bifunctor, it has two right derived functors, and the main result of the chapter shows that they are isomorphic. The definition of group cohomology is extended further when it is given a definition in terms of the Ext functor, which makes it possible to compute group cohomology also via injective resolutions. The final chapter collects related topics that are only touched upon in the text but whose treatment was left outside the thesis for reasons of scope.
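For reference, the central definitions discussed above can be written compactly as follows; these are standard textbook formulations, not quoted from the thesis.

```latex
% Ext as a right derived functor of Hom: choose a projective resolution
% P_\bullet \to M, apply Hom_R(-, N) and take cohomology.
\[
  \operatorname{Ext}^n_R(M, N) \;=\; H^n\!\bigl(\operatorname{Hom}_R(P_\bullet, N)\bigr),
  \qquad \cdots \to P_2 \to P_1 \to P_0 \to M \to 0 .
\]

% Group cohomology of G with coefficients in a G-module M is the special case
% over the group ring \mathbb{Z}[G], computable e.g. from the bar resolution.
\[
  H^n(G, M) \;=\; \operatorname{Ext}^n_{\mathbb{Z}[G]}(\mathbb{Z}, M).
\]
```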
-
(2021)Online hypothesis testing occurs in many branches of science. Most notably it is of use when there are too many hypotheses to test with traditional multiple hypothesis testing or when the hypotheses are created one by one. When testing multiple hypotheses one by one, the order in which the hypotheses are tested often has a great influence on the power of the procedure. In this thesis we investigate the applicability of reinforcement learning tools to solve the exploration-exploitation problem that often arises in online hypothesis testing. We show that a common reinforcement learning tool, Thompson sampling, can be used to gain a modest amount of power with a method for online hypothesis testing called alpha-investing. Finally we examine the size of this effect using both synthetic data and a practical case involving simulated data studying urban pollution. We found that, by choosing the order of the tested hypotheses with Thompson sampling, the power of alpha-investing is improved. The level of improvement depends on the assumptions that the experimenter is willing to make and their validity. In a practical situation the presented procedure rejected up to 6.8 percentage points more hypotheses than testing the hypotheses in a random order.
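A minimal sketch of one way these two ingredients can be combined is given below: hypotheses arrive in a few streams, a Beta posterior per stream tracks how often that stream's hypotheses get rejected, Thompson sampling picks the next stream to test, and an alpha-investing rule (here a common variant in which wealth decreases by alpha/(1-alpha) on a non-rejection and earns a pay-out on a rejection) manages the testing budget. The stream structure, the p-value model and all numerical settings are invented for illustration; this is not the thesis' procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three hypothetical hypothesis streams with different (unknown) fractions of true signals.
signal_prob = [0.05, 0.2, 0.5]
def next_p_value(stream):
    # Null hypotheses give uniform p-values; non-nulls give very small ones.
    return rng.uniform() if rng.uniform() > signal_prob[stream] else 0.01 * rng.uniform()

wealth = 0.05                     # initial alpha-wealth
payout = 0.025                    # wealth earned on each rejection
a = np.ones(3); b = np.ones(3)    # Beta posteriors over each stream's rejection rate
rejections = 0

for _ in range(500):
    if wealth <= 0:
        break
    stream = int(np.argmax(rng.beta(a, b)))   # Thompson sampling: pick the most promising stream
    alpha_j = min(0.1 * wealth, 0.05)         # spend only a fraction of the remaining wealth
    if next_p_value(stream) <= alpha_j:
        wealth += payout                      # rejection earns wealth back
        a[stream] += 1
        rejections += 1
    else:
        wealth -= alpha_j / (1 - alpha_j)     # non-rejection costs alpha/(1-alpha)
        b[stream] += 1

print(f"rejections: {rejections}, remaining wealth: {wealth:.4f}")
```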
-
(2019)The Arctic is warming faster than any other region on Earth due to climate change. The characteristics of the air masses overlying the Arctic play a key role when assessing the magnitude and implications of global warming in the region, but comprehensive studies of Arctic air mass properties covering long time series of measurements are scarce. The aim of this study is to use such a data set to quantify the key characteristics of Arctic air masses prior to transport to the human-habited Eurasian continent, and the typical conditions leading to Arctic events in Värriö. HYSPLIT (Hybrid Single Particle Lagrangian Integrated Trajectory) model was employed to calculate backward atmospheric trajectories arriving at SMEAR I (Station for Measuring Ecosystem-Atmosphere Relations) in Värriö for every hour in 1998-2017. An air mass was classified as Arctic if the backward trajectory arriving at Värriö was located north of 78 °N 72 hours before the arrival time. Data from SMEAR I, including meteorological variables and trace gas and aerosol concentrations, were then gathered in order to compare Arctic and non-Arctic air masses. Of all the hours that were analysed, 15.0 % were classified as associated with an Arctic air mass. The typically cyclonic curvature of the trajectories and the median duration of 10 hours per individual Arctic event were hypothesised to be due to Arctic air mass events being linked to passing low pressure systems. Arctic air masses were found to be colder and have lower moisture content in summer, when the difference at surface level was 5.6 °C and 1.7 g m-3 respectively, compared to non-Arctic air masses. In other seasons the differences were less pronounced, but average particle and trace gas concentrations were found to be notably lower for Arctic air masses than for non-Arctic air masses. An exception to this was ozone, which had 24.6 % higher average concentration in Arctic air masses in months between November and February, compared to non-Arctic air masses. The annual median aerosol particle concentration in Arctic air masses was found to be 308 cm-3 and only 129 cm-3 between November and March, on average. During a median year, the value of condensation sink (CS) was on average 65 % smaller in Arctic air masses than in the non-Arctic. The Kola Peninsula industry was observed to increase concentrations of SO2 and aerosol particles, particularly Aitken mode (25-90 nm) particles, of affected air masses. Overall, Arctic air masses were found to have several unique characteristics compared to other air masses arriving at SMEAR I, Värriö. As expected, Arctic air masses are colder and drier than non-Arctic air masses, but the difference is pronounced only in summer months. Other air mass characteristics, especially aerosol particle and trace gas concentration were generally found to be lower, unless the air mass was influenced by the industrial sites in the Kola Peninsula.
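The classification rule itself is simple: look up each trajectory's latitude 72 hours before arrival and flag the hour as Arctic if that point lies north of 78 °N. A small pandas sketch of that rule on invented trajectory end points (not actual HYSPLIT output) is shown below.

```python
import pandas as pd

# Invented examples: latitude of each backward trajectory 72 h before arriving at Värriö.
traj = pd.DataFrame({
    "arrival_time": pd.to_datetime(["2017-01-01 00:00", "2017-01-01 01:00", "2017-01-01 02:00"]),
    "lat_minus_72h": [81.2, 66.5, 79.0],
})
traj["arctic"] = traj["lat_minus_72h"] > 78.0
print(traj)
print(f"share of Arctic hours: {traj['arctic'].mean():.1%}")
```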
-
(2020)Investment funds are continuously looking for new technologies and ideas to enhance their results. Lately, with the success observed in other fields, wealth managers are taking a closer look at machine learning methods. Even if the use of ML is not entirely new in finance, leveraging new techniques has proved to be challenging and few funds succeed in doing so. The present work explores the usage of reinforcement learning algorithms for portfolio management in the stock market. The stochastic nature of stocks is well known, and aiming to predict the market is unrealistic; nevertheless, the question of how to use machine learning to find useful patterns in the data that enable small market edges remains open. Based on the ideas of reinforcement learning, a portfolio optimization approach is proposed. RL agents are trained to trade in a stock exchange, using portfolio returns as rewards for their RL optimization problem, thus seeking optimal resource allocation. For this purpose, a set of 68 stock tickers in the Frankfurt exchange market was selected, and two RL methods applied, namely Advantage Actor-Critic (A2C) and Proximal Policy Optimization (PPO). Their performance was compared against three commonly traded ETFs (exchange-traded funds) to assess the algorithms' ability to generate returns compared to real-life investments. Both algorithms were able to achieve positive returns in a year of testing (5.4% and 9.3% for A2C and PPO, respectively; a European ETF (VGK, Vanguard FTSE Europe Index Fund) reported 9.0% returns for the same period) as well as healthy risk-to-return ratios. The results do not aim to be financial advice or trading strategies, but rather explore the potential of RL for studying small to medium-sized stock portfolios.
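A minimal sketch of the general setup, assuming a gymnasium-style environment whose reward is the daily portfolio return and the stable-baselines3 implementation of A2C, is given below. The synthetic return data and the environment details are placeholders; this is not the thesis' 68-ticker Frankfurt setup.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import A2C

class PortfolioEnv(gym.Env):
    """Toy setting: observe yesterday's asset returns, choose weights, earn the portfolio return."""

    def __init__(self, returns):
        super().__init__()
        self.returns = returns                      # shape (T, n_assets), daily returns
        n = returns.shape[1]
        self.action_space = spaces.Box(0.0, 1.0, shape=(n,), dtype=np.float32)
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(n,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        return self.returns[self.t].astype(np.float32), {}

    def step(self, action):
        w = np.clip(action, 0.0, None)
        w = w / (w.sum() + 1e-8)                    # long-only weights summing to one
        self.t += 1
        reward = float(self.returns[self.t] @ w)    # portfolio return for the day is the reward
        terminated = self.t >= len(self.returns) - 1
        return self.returns[self.t].astype(np.float32), reward, terminated, False, {}

# Synthetic daily returns for 5 assets (placeholder for real market data).
rets = np.random.default_rng(0).normal(0.0004, 0.01, size=(500, 5))
model = A2C("MlpPolicy", PortfolioEnv(rets), verbose=0)
model.learn(total_timesteps=20_000)
```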
-
(2019)Arsenic (As) is a metalloid naturally present in the environment. Arsenic species vary in toxicity. Metal mining has contributed to the anthropogenic input of arsenic to groundwaters and surface waters. In this study, water samples were collected from 20 sample points in three mining-impacted study areas in Finland: the former Ylöjärvi Cu–W–As and Haveri Au–Cu mines, and the active Pyhäsalmi Zn–Cu mine. Six groundwater well samples, eleven surface water samples and three tailings seepage collection ditch samples were analyzed for dissolved arsenic speciation by HPLC-ICP-MS and for geochemical composition by ICP-MS, titration, and ion chromatography. Dissolved arsenic concentrations ranged from 14.2 to 6649 µg L⁻¹ in samples collected at the Ylöjärvi study area, from 0.5 to 6.2 µg L⁻¹ in samples collected at the Haveri study area, and from 0.2 to 9.4 µg L⁻¹ in samples collected at the Pyhäsalmi study area. In all study areas, measured dissolved arsenic concentrations showed a general decrease from the tailings to the surroundings. Speciation analysis showed that two of the samples collected at the Ylöjärvi study area had arsenite [As(III)] as the dominant form of dissolved inorganic arsenic (iAs), three had arsenate [As(V)] as the dominant form of dissolved iAs, and four had a mixture of both. In the water samples collected at the Haveri and Pyhäsalmi study areas, all concentrations of dissolved arsenic species were below the method detection limits. Also, none of the 22 water samples analyzed for arsenic speciation had dissolved MMA or DMA concentrations above the method detection limits. Identification of dissolved arsenic species in the sampled waters in Haveri and Pyhäsalmi, and of MMA and DMA in all sampled waters, requires more detailed study. A significant two-tailed Pearson correlation between dissolved arsenic and dissolved molybdenum (Mo) (r=0.80**, n=20), and between dissolved arsenic and dissolved potassium (K) (r=0.68**, n=19), suggests that in these three study areas the distributions of dissolved arsenic and Mo, as well as dissolved arsenic and K, may be controlled by the same environmental variables. Anomalously high maximum concentrations of dissolved Al, Ca, Co, Cu, Fe, Ni, and SO4 were measured in surface water samples collected at the Ylöjärvi and Haveri study areas, and in a seepage collection ditch sample collected at the Pyhäsalmi study area.
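The reported associations are plain two-tailed Pearson correlations between element concentrations across the sample points; a sketch of that computation with scipy on invented concentration values (not the study's measurements) is shown below.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
# Invented dissolved concentrations (µg/L) at 20 hypothetical sample points.
arsenic = rng.lognormal(mean=2.0, sigma=1.5, size=20)
molybdenum = 0.5 * arsenic + rng.lognormal(mean=1.0, sigma=0.5, size=20)

r, p = pearsonr(arsenic, molybdenum)
print(f"Pearson r = {r:.2f}, two-tailed p = {p:.3g}")
```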
-
(2020)Biomass is an important parameter for crop monitoring and management, as well as for assessing the carbon cycle. In the field, allometric models can be used for non-destructive biomass assessment, whereas remote sensing is a convenient method for upscaling the biomass estimations over large areas. This study assessed the dry leaf biomass of Agave sisalana (sisal), a perennial crop whose leaves are grown for fibre and biofuel production in tropical and subtropical regions. First, an allometric model was developed for predicting the leaf biomass. Then, Sentinel-2 multispectral satellite imagery was used to model the leaf biomass at an 8851 ha plantation in South-Eastern Kenya. For the allometric model 38 leaves were sampled and measured. Plant height and leaf maximum diameter were combined into a volume approximation and the relation to biomass was formalised with linear regression. A strong log-log linear relation was found and leave-one-out cross-validation for the model showed good prediction accuracy (R2 = 0.96, RMSE = 7.69 g). The model was used to predict biomass for 58 field plots, which constituted a sample for modelling the biomass with Sentinel-2 data. Generalised additive models were then used to explore how well biomass was explained by various spectral vegetation indices (VIs). The highest performance (D2 = 74%, RMSE = 4.96 Mg/ha) was achieved with VIs based on the red-edge (R740 and R783), near-infrared (R865) and green (R560) spectral bands. Highly heterogeneous growing conditions, mainly variation in the understory vegetation, seemed to be the main factor limiting the model performance. The best performing VI (R740/R783) was used to predict the biomass at the plantation level. The leaf biomass ranged from 0 to 45.1 Mg/ha, with a mean of 9.9 Mg/ha. This research resulted in a newly established allometric equation that can be used as an accurate tool for predicting the leaf biomass of sisal. Further research is required to account for other parts of the plant, such as the stem and the roots. The biomass-VI modelling results showed that multispectral data is suitable for assessing sisal leaf biomass over large areas, but the heterogeneity of the understory vegetation limits the model performance. Future research should address this by investigating the background effects of the understory and by looking into complementary data sources. The carbon stored in the leaf biomass at the plantation corresponds to that in the woody aboveground biomass of natural bushlands in the area. Future research is needed on soil carbon sequestration and soil and plant carbon fluxes to fully understand the carbon cycle at the sisal plantation.
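The allometric part boils down to a linear regression in log-log space, with predictions back-transformed to grams. The sketch below shows that workflow on invented volume/biomass pairs; the study's actual coefficients and data are not reproduced here. In practice a bias-correction factor (e.g. Baskerville's) is often applied when back-transforming from the log scale.

```python
import numpy as np

# Invented volume proxy (height x diameter^2, cm^3) and dry leaf biomass (g) pairs.
volume = np.array([300, 800, 1500, 2600, 4100, 6000], dtype=float)
biomass = np.array([12, 30, 55, 95, 150, 210], dtype=float)

# Fit log(biomass) = a*log(volume) + b, then back-transform to predict in grams.
a, b = np.polyfit(np.log(volume), np.log(biomass), 1)
predict = lambda v: np.exp(a * np.log(v) + b)

print(f"log-log slope a = {a:.2f}, intercept b = {b:.2f}")
print(f"predicted leaf biomass for a 2000 cm^3 plant: {predict(2000.0):.1f} g")
```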
-
(2020)Automatic readability assessment is considered a challenging task in NLP due to its high degree of subjectivity. The majority of prior work in assessing readability has focused on identifying the level of education necessary for comprehension, without considering text quality, i.e., how naturally the text flows from the perspective of a native speaker. Therefore, in this thesis, we aim to use language models, trained on well-written prose, to measure not only text readability in terms of comprehension but also text quality. We developed two word-level metrics based on the concordance of article text with predictions made using language models to assess text readability and quality. We evaluate both metrics on a set of corpora used for readability assessment or automated essay scoring (AES) by measuring the correlation between the scores assigned by our metrics and human raters. According to the experimental results, our metrics are strongly correlated with text quality, achieving correlations of 0.4-0.6 on 7 out of 9 datasets. We demonstrate that GPT-2 surpasses other language models, including the bigram model, LSTM, and bidirectional LSTM, on the task of estimating text quality in a zero-shot setting, and that the GPT-2 perplexity-based measure is a reasonable indicator for text quality evaluation.
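A perplexity-based quality signal of the kind mentioned above can be computed with an off-the-shelf GPT-2 from the Hugging Face transformers library, as sketched below. This is a generic document-level perplexity, not the thesis' word-level metrics.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Lower perplexity means the language model finds the text more natural."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss   # mean cross-entropy per token
    return float(torch.exp(loss))

print(perplexity("The cat sat quietly on the warm windowsill."))
print(perplexity("Cat the quietly warm sat windowsill on the."))
```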