
Browsing by Title


  • Vuorikoski, Juho (2019)
    Sir William Rowan Hamilton is generally regarded as the creator of vector algebra. He was a respected mathematician born at the beginning of the 19th century. After years of trying to find a way to generalise complex arithmetic to three dimensions, he finally, in 1843, invented the quaternions, an extension of the complex numbers to four-dimensional numbers. Hamilton's quaternion-based system is the first published vector algebra. Hamilton was famous and influential already in his own time, yet more than twenty years passed before his system came into wider use. Around the turn of the century, modern vector algebra was created on the basis of this system. Another system, however, was also considered as a foundation for the modern one. It had been created by the German schoolteacher Hermann Grassmann (1809-1877). Unlike Hamilton, Grassmann was not known to his contemporaries for his achievements in mathematics; in fact, he made his first mathematical publication past the age of thirty. Grassmann had for years been fascinated by the idea of adding line segments, and in 1844 he finally published a book that started from that simple idea and arrived at one of the first true presentations of a vector algebra, and the first presentation of linear algebra. Unfortunately, his obscurity, together with the fact that his contemporaries found the book notoriously hard to read, ensured that it went largely unnoticed. Fifteen years later he decided to try again and began writing an improved version of his system, which he finally published in 1863 under the title "Ausdehnungslehre", in English "Extension Theory". This book, too, received little attention during his lifetime, but about a hundred years later Grassmann's work rose to new prominence.
This was because the "Extension Theory" is far more abstract than the other systems of its time and defines everything n-dimensionally, instead of mainly in three dimensions (or four, in the case of the quaternions). In this system, in place of the cross product there is the exterior product, and the inner product is defined through this exterior product and is considerably more involved than in ordinary vector algebra. As science developed, the need to handle things n-dimensionally grew, and the very features that had prevented Grassmann's contemporaries from understanding his work brought it back into the limelight.
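Grassmann's exterior product, mentioned above, can be illustrated in a few lines. The sketch below is my own illustration, not from the thesis: it forms the n-dimensional wedge product of two vectors and shows how, in three dimensions, its independent components reduce to the familiar cross product.

```python
import numpy as np

def wedge(a, b):
    """Exterior (wedge) product of two vectors in R^n.

    Returns the antisymmetric matrix of bivector components
    (a ^ b)_{ij} = a_i b_j - a_j b_i, defined in any dimension n,
    unlike the cross product, which is special to R^3.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.outer(a, b) - np.outer(b, a)

# In R^3 the independent components of a ^ b coincide (up to ordering)
# with the components of the cross product a x b.
a, b = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
B = wedge(a, b)
print(B[1, 2], B[2, 0], B[0, 1])   # -3.0 6.0 -3.0
print(np.cross(a, b))              # [-3.  6. -3.]
```

The same `wedge` function works unchanged for vectors of any length, which is exactly the n-dimensional generality that distinguished Grassmann's system.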
  • Rawlings, Alexander (2021)
    This thesis presents the results from seventeen collisionless merger simulations of massive early-type galaxies in an effort to understand the coalescence of supermassive black holes (SMBHs) in the context of the Final Parsec Problem. A review of the properties of massive early-type galaxies and their SMBHs is presented alongside a discussion on SMBH binary coalescence to motivate the initial conditions used in the simulations. The effects of varying the SMBH masses and stellar density profiles in the progenitor initial conditions on SMBH coalescence were investigated. Differing mass resolutions between the stellar particles and the SMBHs were also tested for each physical realisation. The simulations were performed on the supercomputers Puhti and Mahti at CSC, the Finnish IT Centre for Science. SMBH coalescence was found to occur only in mergers involving SMBH binaries of equal mass, with the most rapid coalescence observed in galaxies with a steep density profile. In particular, the eccentricity of the SMBH binary was observed to be crucial for coalescence: all simulations that coalesced displayed an orbital eccentricity in excess of e=0.7 for the majority of the time for which the binary was bound. Simulations of higher mass resolution were found to have an increased number of stellar particles able to interact with the SMBH binary to remove orbital energy and angular momentum, driving the binary to coalescence. The gravitational wave emission from an equal mass SMBH binary in the final stages before merging was calculated to be within the detection limits required for measurement by pulsar timing arrays. Mergers between galaxies hosting SMBHs of unequal mass were unable to undergo coalescence irrespective of mass resolution or progenitor density profile, despite the binary in some of these simulations displaying a high orbital eccentricity.
It was determined that the stellar particles interacting with the SMBH binary were unable to remove the orbital energy and angular momentum required to bring the SMBHs to within the separation needed for efficient gravitational wave emission. A trend between increasing mass resolution and an increasing number of stellar particles able to remove energy from the SMBH binary was observed across all the simulation suites. This observation is of paramount importance, as three-body interactions are essential in removing orbital energy and angular momentum from the SMBH binary, and thus in overcoming the Final Parsec Problem. As such, it is concluded that the Final Parsec Problem is a numerical artefact arising from insufficient mass resolution between the stellar particles and the SMBHs rather than a physical phenomenon.
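The importance of eccentricity for gravitational-wave driven coalescence can be made concrete with the standard Peters (1964) formulas. The sketch below is a textbook illustration with hypothetical numbers, not code or parameters from the thesis.

```python
G = 6.674e-11        # gravitational constant, SI
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
PC = 3.086e16        # parsec, m
YEAR = 3.156e7       # s

def peters_timescale(a, m1, m2, e=0.0):
    """Gravitational-wave coalescence time of a binary (Peters 1964):
    the circular-orbit result scaled by the approximate eccentricity
    factor (1 - e^2)^(7/2); a in metres, masses in kg, result in s."""
    t_circ = 5 * C**5 * a**4 / (256 * G**3 * m1 * m2 * (m1 + m2))
    return t_circ * (1.0 - e**2) ** 3.5

m = 1e8 * M_SUN                       # hypothetical equal-mass SMBHs
t0 = peters_timescale(0.01 * PC, m, m, e=0.0)
t7 = peters_timescale(0.01 * PC, m, m, e=0.7)
print(t0 / YEAR, "yr for a circular binary")
print(t7 / t0)   # e = 0.7 shortens the inspiral by a factor of ~10
```

The steep scaling with separation (t ∝ a⁴) is what makes the last parsec so critical: stellar interactions must shrink the orbit before gravitational radiation can take over, and a high eccentricity dramatically lowers that hand-over threshold.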
  • Haapanen, Sara Anne (2017)
    This thesis calls into question what impact the public nature of the Finnish breadline has on levels of stigmatisation, together with its social and spatial impacts. The effects concern predominantly those who use the breadline, but also the general public and those who provide the aid. The study looked at how those using the food aid felt they were perceived by the people around them and how this affected their behaviour and emotions. Did it have any effect on local social relations and class, and on how those with perceived higher status acted in return? Finally, how did the queue directly impact the area? The breadline studied was in Kallio, Eastern Helsinki; a mixed area known as a traditional working-class district with a reputation for being slightly rough, but also a trend-setting area undergoing low-level gentrification. This and other breadlines have been a focus of media interest, generally concerned with how so many people are being forced to turn to them for help in spite of the country's reputation as a solid welfare state. The lines involve a long wait, and hence become very public affairs, as they often form in busy town areas. In comparison, the British system of food charity is a more private affair: provisions are made through food banks with no large queues, which removes the element of public view. Would those in need of food aid find this a more acceptable method of help? The main form of research was immersive observation combined with a small-scale questionnaire. A thematic analysis revealed several common themes and methods that users of food aid were able to utilise to help deal with some of the stigmatisation and the societal class created for them. The results led to the conclusion that the use of public space is problematic in terms of welfare aid.
Not only does it further increase the stigmatisation of the people in the line and shape how they deal with it, but it also directly affects the area. There is further evidence to indicate a power struggle for 'ownership' of the area between the general public, those using the breadline, and those who provide it. The area of study has become a site of social friction. In conclusion, there are good grounds and supportive evidence, both in practice and from breadline users' preferences, to favour a different practice of food aid.
  • Garmash, Olga (2016)
    A myriad of different volatile organic compounds is emitted into the atmosphere through biogenic and anthropogenic pathways. In the atmosphere, they are oxidised, forming products of lower volatility, which may then condense and contribute to the formation and growth of aerosol particles. Extremely low-volatility organic compounds (ELVOC) are a group of highly oxidised molecules that were recently observed in the oxidation of biogenically emitted monoterpenes. It has been shown that ELVOCs from monoterpene oxidation in the atmosphere explain most of the aerosol growth rates in the boreal forest. In this work, I investigate ELVOC formation from aromatic compounds, which are primarily emitted from anthropogenic sources. This thesis focuses on the oxidation of benzene, the simplest aromatic molecule, although ELVOCs forming from toluene, naphthalene and phenol are also presented. The experimental work with all compounds, except phenol, was conducted in a flow tube at the University of Helsinki, Finland, while an in-depth study of benzene was performed in the chamber at Forschungszentrum Jülich GmbH, Germany. The oxidation products were detected using a nitrate-based chemical ionisation atmospheric pressure interface time-of-flight mass spectrometer (CI-APi-TOF). It was found that all four molecules produced ELVOC in the reaction with the hydroxyl radical. In the case of benzene, the detected ELVOC monomers had a maximum of 11 oxygen atoms, while dimers had up to 18. Toluene and naphthalene oxidation yielded ELVOC monomers with a maximum of 10 oxygen atoms. The possibility of multiple reactions with the hydroxyl radical, however, could not be eliminated, as the products of aromatic oxidation usually have higher reaction rate coefficients than their parent molecule.
The average ELVOC molar yield in benzene oxidation was 3.7%, while for phenol it was 1.5%, which suggests that a primary OH attack followed by autoxidation is an important pathway for ELVOC formation in the oxidation of benzene. The study of ELVOC-aerosol particle interactions in the chamber revealed that ELVOCs from aromatic precursors behave similarly to ELVOCs formed from monoterpenes, rapidly condensing on the introduced aerosol particles. The organic fraction of the aerosols had a similar O:C ratio to the total gas-phase ELVOC, indicating that ELVOCs were the primary condensing species. The decreasing O:C ratio of the aerosol phase with higher aerosol loading confirmed that ELVOCs dominate aerosol growth at low aerosol loading. This thesis also presents a review of the known chemical pathways of benzene oxidation and suggests some further steps towards ELVOC formation. Benzene oxidation pathways may serve as a model for studying the oxidation of other aromatic molecules. Substituted aromatic molecules are more reactive with the hydroxyl radical and are likely to yield more ELVOCs, which may be a dominant factor in aerosol growth in urban and industrialised areas.
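For reference, the molar yield figures quoted above follow the usual definition: moles of ELVOC formed per mole of precursor reacted. A minimal sketch with hypothetical concentrations (not data from the thesis):

```python
def molar_yield(product_formed, precursor_reacted):
    """Molar yield: moles (or molecules) of product formed per mole
    (or molecule) of precursor reacted, in any consistent unit."""
    return product_formed / precursor_reacted

# Hypothetical number concentrations in molecules cm^-3:
y = molar_yield(product_formed=3.7e7, precursor_reacted=1.0e9)
print(f"{y:.1%}")   # 3.7%
```

Even a few-percent yield matters for aerosol growth, because ELVOCs condense essentially irreversibly once formed.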
  • Chua, Samuel De Xun (2023)
    The behaviour of Greenland's tidewater glaciers is crucial for understanding the Greenland Ice Sheet. The retreat of these marine-terminating glaciers has far-reaching implications, impacting not only the regional hydrography but also the diverse fjord ecosystems. This study investigated the rapid retreat of Narsap Sermia (NS), a tidewater glacier located in Southwest Greenland. Between 1987 and 2022, the volume of ice discharged from NS increased by 45%, a rate more than double the Greenlandic mean. This destabilization led to retreat events occurring in three distinct episodes: 2004-2005, 2010-2012, and 2019-2021. The study identified that changes in subglacial hydrology were pivotal in triggering and sustaining these retreats. Drainage of ice-dammed lakes, or increased meltwater resulting from heatwaves over the ice sheet, suddenly increased subglacial freshwater discharge, subsequently instigating these retreat events. Once initiated, exposure to elevated ocean temperatures or retreat into a glacial trough further sustained ice loss at the terminus, eventually resulting in a collapse of the glacial front. As of the summer of 2023, Narsap Sermia is still retreating, and the study anticipates that a further retreat of approximately 3 kilometers is inevitable. Subsequently, should air and ocean temperatures continue to rise, Narsap Sermia is poised to retreat by a further 30 km, dramatically transitioning into a land-terminating glacier. This transformation could occur in as little as 30 years, with profound consequences for local eco-hydrology and nearby communities.
  • Aarnio, Leo Tomi Johannes (2017)
    The debate over antidepressants, especially SSRIs, has lasted for more than a decade, the controversy revolving mostly around their efficacy over placebo as a treatment for depression. Sure enough, one gets conflicting results depending on which primary outcome measure is chosen to operationalize 'treatment effect' and where its 'clinically significant' size is delimited. Moreover, including only published studies in an evidence base assumed to be wholly unbiased artificially inflates the efficacy claims being made. The stakes are high: depression is estimated to be one of the costliest of all illnesses, accounting for as much as 12% of the global burden of disease. However, so far, the debate has been nitpicking in the sense of not seeing the forest for the trees; and the poor forestry may just end up killing more than the weight of the already hefty one eighth. What has not been pointed out before, then, is that there is something more fundamentally wrong here; something only indirectly reflected in the multiple points of controversy, responsible for this and other debates akin to it resting in a state of stalemate and confusion. I argue that this something is the wrong statistical paradigm we have embraced in clinical guideline development. I make my case by identifying two classes of controversy in the antidepressant debate: those related to patient preference and those related to model choice. I hint at how these issues could be resolved under a better framework, while they cannot even hope to be resolved within the present paradigm. In terms of the first class of issues, I argue that we have not been able to agree upon an appropriate operationalization of 'treatment effect', nor its threshold of 'clinical significance', nor 'severity of depression', because we have discounted decision theory and, with it, the incorporation of patient preference into treatment choice.
In terms of establishing the conclusions to a given operationalization of any of the above by means of valid argument based on premises backed by trustworthy evidence, there is next to nothing to cling to in the present guidelines for depression. For valid argument, we require the language of decision theory; for a purposeful evidence base, we require utilities and outcome measures chosen so as to be able to establish rational, patient-centered choice. We need predictions of all important treatment outcomes for a patient exchangeable with some subset of those in the evidence base, not parameter inferences of summary statistics of treatment effect, the size of which is interpreted from a vantage point dependent on arbitrary choices over the primary outcome measure and a cut-off value for clinical significance. In terms of the second class of issues, we have not taken proper account of uncertainty in deriving the interval estimates for our already ill-founded meta-analytic summary effects either. What full probability model to entertain for one's inferences – how to account for publication bias, between-trial heterogeneity, inconsistency, poor-quality trials and the like – is a matter of subjective model choice. The seemingly 'objective' protocol applied across the board is based on classical fixed and random effects meta-analyses with a tacit assumption of zero bias. Applying such a protocol means placing all our eggs in a broken basket. I show that the meta-analytic models presently favoured rest on false assumptions, yielding biased and overconfident interval estimates. Since the evidence statements depend on whether an arbitrarily chosen threshold value for reaching 'clinical significance' is included in the fallacious interval estimate inferred, the evidence statements, which in turn ground the recommendations of the guidelines, are fallacious too.
I show the 'subjectivity' of the allegedly 'objective' evidence statements by conditioning the inferences over meta-analytic summary statistics of treatment effect on a set of plausible candidate models, none of which, importantly enough, can be considered 'objective' in the sense of dominating as a model choice universally preferred over all the others. I apply the framework to an evidence base used in the leading guideline for depression treatment. I show how the flexibility of the Bayesian framework enables more credible meta-analytic models and better evidence statements with more intuitive interpretations. The Bayesian paradigm also allows for merging the output from multiple candidate models into a model-averaged posterior predictive better calibrated for rational choice than any classical parameter estimator could ever hope to be. The antidepressant debate is therefore not only a singular case with problems particular to it, but rather one sad example of classical statistics, and the evidence-based medicine movement it has hijacked, being unable to deal with either decision or uncertainty. What to measure, how, and where to draw threshold values cannot be answered without accounting for patient preference. Since parameter and model uncertainty are always present, this uncertainty needs to be accounted for one way or the other, the most plausible means so far offered being probability calculus, the coherent application of which requires assigning distributions to unknowns that classical statistics treats as fixed. Being able to deal convincingly with both decision and uncertainty is a prerequisite for any viable framework steering the development of clinical guidelines, which ultimately cannot help but concern decision under uncertainty, be the application the treatment of depression or any other ailment found in DSM-IV or ICD-10, for that matter. The wrong paradigm is bankrupting us, posing a serious threat to the credibility of all applied medicine.
Something needs to be done.
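To make the contrast concrete: the classical protocol criticised above pools trial effects with fixed- or random-effects weighting. The sketch below uses synthetic numbers and the textbook DerSimonian-Laird estimator (not the thesis's Bayesian models) to show how merely admitting between-trial heterogeneity already widens the interval a fixed-effect analysis would report.

```python
import numpy as np

def fixed_effect(y, v):
    """Inverse-variance fixed-effect pooled estimate and its variance."""
    w = 1.0 / v
    return np.sum(w * y) / np.sum(w), 1.0 / np.sum(w)

def random_effects(y, v):
    """DerSimonian-Laird random-effects estimate: adds an estimated
    between-trial variance tau^2 to each trial's within-trial variance."""
    w = 1.0 / v
    mu_fe, _ = fixed_effect(y, v)
    q = np.sum(w * (y - mu_fe) ** 2)                 # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)
    w_re = 1.0 / (v + tau2)
    return np.sum(w_re * y) / np.sum(w_re), 1.0 / np.sum(w_re)

# Hypothetical trial effect sizes and within-trial variances:
y = np.array([0.30, 0.10, 0.45, 0.20])
v = np.array([0.01, 0.02, 0.015, 0.01])
print(fixed_effect(y, v))
print(random_effects(y, v))   # larger variance once heterogeneity is admitted
```

Neither model, of course, addresses publication bias or model uncertainty; the thesis's point is precisely that the choice among such models is itself subjective and should be handled within a Bayesian, decision-theoretic framework.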
  • Maunu, Liisa (2023)
    Macquarie Island is a subaerial fraction of oceanic crust where lithologies from mantle peridotites to crustal gabbro, dolerite, and extrusive rocks are present, thus providing a unique opportunity to study the geochemistry and petrology of the oceanic crust. Macquarie Island represents a mid-ocean crust ophiolite in which the potential geochemical modification by continental crust and the effects of subduction initiation are absent. The genetic link between the plutonic and extrusive rocks, and the processes leading to the formation of the ophiolite sequence, were studied in this thesis. A set of samples representing the different rock types of the oceanic crust was studied petrographically as well as for whole-rock major and trace element geochemistry. Selected samples were studied for the major and minor component geochemistry of chromian spinel, silicates, and apatite. The harzburgites of Macquarie Island are depleted in trace element composition and are not a straightforward residue of the source of the crustal section of the island. Infiltration of basaltic melt into a potentially former lherzolitic mantle source has been the dominating process leading to the formation of the Macquarie Island oceanic crust. As a consequence of melt infiltration, plagioclase-bearing wehrlites recrystallized, and these rocks probably acted as a more enriched source for the crustal rocks. Major and trace element data show that fractional crystallization has not been a significant process in forming the island. The extrusive basalts are more primitive in MgO and SiO2 contents and more enriched in REE contents compared to the gabbroic rocks of the island. This could be explained by porous fluids migrating through the oceanic crust, modifying both the major and trace element compositions of the samples, or by the extrusive and gabbroic rocks forming from different mantle sources.
This study shows that the formation of oceanic crust is much more complex than the often assumed simple model of fractional crystallization from a mantle melt, or a hypothesized gabbro-dolerite-basalt plumbing system forming genetically linked crustal rocks.
  • Kekäläinen, Pirkko (2016)
    The sampling was done in 2013 during Integrated Ocean Drilling Program Expedition 347 at two sites in the river Ångermanälven estuary in Sweden. The area has had the highest rate of uplift in Fennoscandia since the last glaciation. It was freed from ice 10 500 years before present, after which the shore displacement has been notable. The sedimentation environment has changed considerably, from an ice-proximal setting, through open sea, to a less exposed estuary. Varve deposition has been an ongoing process in the estuary for several thousand years, and it has been correlated with annual discharge, which makes the estuary interesting for palaeoenvironmental studies. The aim of the study was to analyze the changes in the grain-size distribution and to link these changes to environmental changes. The initial subsampling was made in Bremen in 2014 and the analysis in the laboratory of the Department of Geosciences and Geography of the University of Helsinki in 2015. A method in compliance with the ISO 13320:2009 standard was used in the laser diffraction particle size analysis. The water content and loss-on-ignition (LOI) were also determined from selected samples. The results were processed with the statistics program GRADISTAT 8.0. The resulting figures were compared between the sites and displayed as a function of depth. The sediment consists of varying silt and sand deposits, in which both regular and irregular changes were seen. Interpretations of the sedimentation environments were made and compared with the results of earlier studies. At least three sedimentary units were recognized. Within the upper organic-rich silt unit, the point of maximum salinity of the Baltic Sea was recognized. The effects of the shoreline displacement were also noted as a coarsening of the sediment with decreasing distance to the source of the material. The unit was interpreted to represent the brackish Litorina Sea stage.
The middle unit was more varied, and the relationship between the two sites was more complex. A finer sample resolution would have been needed for an accurate interpretation of this unit; it was nevertheless interpreted as the freshwater stage of the Ancylus Lake. The lower sandy unit showed indications of a glaciofluvial environment and likewise a change in the distance to the source of the material. It was interpreted as a freshwater lake in an ice-proximal setting. This study utilized material from the CISU project funded by the Academy of Finland (decision 281143).
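The grain-size measures produced by GRADISTAT are, by default, the Folk and Ward (1957) graphical statistics, computed from percentiles of the distribution expressed in phi units (phi = -log2 of the diameter in millimetres). A minimal sketch of the graphical mean and sorting with hypothetical class data, not the thesis samples:

```python
import numpy as np

def folk_ward(phi, weights):
    """Folk & Ward (1957) graphical mean and sorting from a grain-size
    distribution in phi units, given class midpoints and weight
    percentages (the measures reported by GRADISTAT)."""
    cum = np.cumsum(weights) / np.sum(weights) * 100.0
    p = lambda q: np.interp(q, cum, phi)             # percentile in phi
    mean = (p(16) + p(50) + p(84)) / 3.0
    sorting = (p(84) - p(16)) / 4.0 + (p(95) - p(5)) / 6.6
    return mean, sorting

# Hypothetical distribution spanning fine sand to fine silt:
phi = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
w = np.array([5.0, 15.0, 30.0, 30.0, 15.0, 5.0])
m, s = folk_ward(phi, w)
print(f"mean = {m:.2f} phi, sorting = {s:.2f} phi")
```

Plotted against core depth, shifts in such mean and sorting values are what allow the kind of unit boundaries and environmental interpretations described above.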
  • Jokela, Tuomas (2015)
    The Late Miocene (11.6–5.3 million years ago) was a period of global climatic cooling and aridification. These events also had an effect on land mammals, which began to adapt to the increasingly open and grass-dominated biomes. The Eurasian Pikermian fauna is a well-known example of this evolutionary trend, including many species adapted to the new environment known as the Pikermian Biome in the Eurasian midlatitudes. The aim of this study was to deduce the paleodiets of individual Pikermian herbivore taxa, to compare results across three localities as well as with previous results, and to assess the biome they lived in. Fossil teeth of large terrestrial mammalian herbivores from three classical Late Miocene localities of the Pikermian Biome – Pikermi, Samos (Greece), and Maragheh (Iran) – were analyzed with the mesowear method. Mesowear is the wear of mammalian herbivore molar crowns, cusps, and facets that can be seen with the naked eye, and it is determined by the animal's diet (browsing and/or grazing). The mesowear scores were used in a cluster analysis in which the fossil taxa were clustered with modern taxa belonging to well-known dietary categories. The results indicated the dietary categories of the fossil taxa. Among individual taxa, the Maraghean rhinoceros _Chilotherium persiae_ gave a surprising browsing signal despite its hypsodonty. _Gazella_ from Pikermi and Samos clustered with browsers to browse-dominated mixed feeders, while the sample from Maragheh indicated a more grass-dominated mixed diet. The antelope _Tragoportax_ yielded results indicating that the Pikermi population used more grass in its diet than the Samian one, even though Pikermi is regarded as the more closed of the two localities. The abundant hipparionine horses, typical of the Pikermian Fauna and previously seen as an indicator of open savanna-like biomes, showed a wide range of diets, but none of the three hipparion populations included zebra-like grazers.
The wide dietary range of the Maraghean mammals suggests that Maragheh had a variable paleoenvironment that included both grassy openings and closed forest. The results confirm those of previous studies, depicting a fauna consisting of browsers and mixed feeders with a notable lack of specialized grazers. This suggests that the Pikermian Biome was not as open as the modern East African savannas to which it has classically been compared, but was instead a varied woodland with grassy openings.
  • Nur, Nabila (2024)
    Finnish society is rapidly becoming more diverse, which is also reflected in housing preferences and choices. A growing group in the Finnish housing market is the children of immigrants, also known as the second generation. The second generation's successes and challenges in integrating into society have been a hot topic around Europe, including in Finland. Despite the group's growing presence, little is known about their housing preferences or choices, especially in the case of Finland. This thesis aims to study the housing preferences and choices of second-generation Somalis in Helsinki. The data of this research are based on fifteen semi-structured interviews with young second-generation Somalis between the ages of 25 and 29, and on statistical data received from the City of Helsinki and Statistics Finland, which were visualized as thematic maps demonstrating the spatial distribution of Somali speakers in Helsinki. This thesis is part of the Helsinki Institute of Urban and Regional Studies' (Urbaria) project Housing & Migrants – Immigrants, spatial capital, and urban housing diversity: A comparative study of non-native population groups in the Helsinki metropolitan area. The findings of this study highlight how culture and religion affect the housing preferences and choices of second-generation Somalis and how this group is positioned between different cultural norms. The values of the interviewees are indicative of an ongoing assimilation process; however, the remaining importance of culture and religion prevents full integration into Finnish housing culture. This is especially visible in relation to homeownership, which is a desired form of tenure for many but deemed impossible to pursue in the current Finnish housing market, as there are no Sharia-compliant mortgage options.
Furthermore, the findings emphasize that the perceived possibility of fulfilling one's housing preferences had a major impact on the interviewees' sense of belonging and on their desire to build a life in Finland.
  • Westerlund, Jonas (2016)
    In this thesis we prove the Hurewicz theorem, which states that the n-th homology and homotopy groups are isomorphic for an (n-1)-connected topological space. There exist proofs of the Hurewicz theorem in which one constructs a concrete isomorphism between the groups, but in this thesis we avoid that construction by transferring the problem to the realm of CW complexes and cellular structures with a technique known as cellular approximation. Combined with the cellular homology groups and related results, this technique allows us to analyse the space on a cell-by-cell basis. This reduces the problem significantly and gives rise to many methods not applicable otherwise. To prove the theorem we lay out the foundations of homotopy theory and homology theory. Singular homology theory is introduced, which in turn is used together with the concept of degree to define the cellular homology groups suitable for the analysis of CW complexes. Since CW complexes are built out of homeomorphic copies of the open unit disk extending to its boundary, it is crucial to prove various properties of these subspaces in both homotopy and homology. Fibrations, fiber bundles, and the Freudenthal suspension theorem are introduced for the homotopical viewpoint, while long exact sequences and contractibility play a great role in the homological considerations. CW approximation then makes it possible to apply all this machinery to the topological space in question. Finally, the boundary homomorphisms from the long exact sequences in homotopy and in cellular homology turn out to be the same, which makes it possible to show the existence of an isomorphism between the groups.
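For reference, the theorem in question can be stated as follows (the standard formulation, for n ≥ 2; for n = 1 the corresponding statement instead identifies H_1 with the abelianisation of the fundamental group):

```latex
\textbf{Theorem (Hurewicz).} Let $X$ be an $(n-1)$-connected topological
space with $n \geq 2$, so that $\pi_k(X) = 0$ for all $k < n$. Then
$\widetilde{H}_k(X) = 0$ for $k < n$, and the Hurewicz homomorphism
\[
  h_n \colon \pi_n(X) \longrightarrow H_n(X), \qquad h_n([f]) = f_*(\mu_n),
\]
where $\mu_n$ denotes a fixed generator of $H_n(S^n) \cong \mathbb{Z}$,
is an isomorphism.
```

It is this homomorphism $h_n$ whose bijectivity the thesis establishes without an explicit construction, via cellular approximation and the comparison of boundary maps in the two long exact sequences.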
  • Murto, Sonja (2017)
    In recent decades, rapid urbanization together with industrialization has led to an increase in anthropogenic emissions, resulting in high air pollution concentrations and poor air quality, particularly in developing countries such as China. Due both to the environmental and public health risks posed by poor air quality and to the climate impacts of aerosols, it is of great interest to study and understand aerosol particles and their impact on our surroundings. Aerosols affect the radiative properties of the atmosphere and the surface energy balance. The impact of aerosols on the surface radiative fluxes of the urban surface energy balance is widely known, but the impact on the turbulent energy fluxes, which are important components in the energy balance, has until now remained unclear. To extend the knowledge of aerosol impacts on all the energy balance components, a simple urban land surface model (SUEWS) is used for the period 2006-2009, together with aerosol optical depth (AOD) data from an AERONET station located in Beijing. Using commonly measured meteorological variables, together with parameters defined for the study area of 1 km radius around a meteorological tower, the components of the urban surface energy balance are simulated with the model. For further data analysis, the data are divided into thermal seasons and pollutant categories according to the available AOD data. Extremely polluted conditions occur during 24% of the time with available AOD data, with relatively fewer poor air quality situations in winter (8%) than in summer (27%). The aim of this study is to analyse how much aerosol particles can modify the different surface energy balance components, focusing particularly on the turbulent fluxes.
The model is evaluated against turbulent fluxes observed at the same tower, showing an overestimation of the sensible heat flux and an underestimation, but better model performance, for the latent heat flux. Still, the diurnal behaviour of the fluxes is shown to be well reproduced by the model. The behaviour of the modelled components is further investigated, showing a clear monthly variation for almost all the fluxes contributing to the surface energy balance. The behaviour of the total energy balance is in general controlled by the wet (May to October) and dry periods that distinguish the climate in Beijing. The sensible heat flux is the dominant flux in March, accounting for 59% of the available energy, whereas during the wet periods a higher portion of the available energy is consumed by the turbulent latent heat flux (61% in August). Adding the effect of aerosols, the results clearly show how the net radiative flux is decreased in poor air quality conditions, with differences of 138 W/m2 in the median flux due to the aerosol loading in the atmosphere. The main finding of this study is that aerosols also influence the turbulent fluxes, with the largest aerosol impact on the sensible heat flux occurring during thermal spring (66 W/m2 difference between clean and polluted air conditions). Likewise, in summer, when the latent heat flux consumes the largest share of the available energy, the influence of aerosols is most visible (25 W/m2 difference). This study highlights the importance of maintaining measurements of aerosol concentrations and pollutant characteristics over urban areas, due to their influence not only on the radiative fluxes but on all the components of the surface energy balance, which can further alter the water circulation and give rise to other environmental risks. These findings can therefore be used in urban planning and in issues related to water management and air pollution regulations.
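The flux partitioning discussed above can be written compactly with the standard urban surface energy balance, Q* + Q_F = Q_H + Q_E + ΔQ_S. The sketch below uses hypothetical midday flux values, not output from the SUEWS runs.

```python
def residual_storage_flux(q_star, q_f, q_h, q_e):
    """Urban surface energy balance (SUEWS-type formulation):

        Q* + Q_F = Q_H + Q_E + dQ_S

    where Q* is net all-wave radiation, Q_F anthropogenic heat,
    Q_H the sensible and Q_E the latent turbulent heat flux, and
    dQ_S the net storage heat flux, here solved as the residual.
    All fluxes in W m^-2."""
    return q_star + q_f - q_h - q_e

# Hypothetical midday fluxes (W m^-2):
dq_s = residual_storage_flux(q_star=450.0, q_f=60.0, q_h=220.0, q_e=130.0)
print(dq_s)              # 160.0

bowen = 220.0 / 130.0    # Bowen ratio Q_H / Q_E
print(round(bowen, 2))   # 1.69
```

A decrease in Q* under heavy aerosol loading propagates through this balance, which is why the study's aerosol signal appears not only in the radiative term but in the turbulent fluxes as well.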
  • Mattero, Max (2024)
    This thesis studies gas-rich galaxy mergers at redshifts of z ∼ 1-2 using numerical simulations, with a particular focus on the effect of feedback from active galactic nuclei (AGNs). In total, 16 galaxy mergers at redshifts z = 1 and z = 2 were modeled using the simulation codes KETJU and GADGET-3. The simulations were performed on the supercomputer Mahti located at the Finnish IT Centre for Science (CSC). AGN feedback can be described as the radiative and mechanical energy released through accretion, which acts to heat and disperse the remaining gaseous material surrounding the central supermassive black hole (SMBH). The feedback mechanisms include, for example, photoionization heating due to high-energy photons and winds and jets driven by the AGN. Numerically, AGN feedback was implemented using two models in this thesis: thermal and kinetic AGN feedback, in which the gas particles are either heated or ‘kicked’, respectively. In addition to AGN feedback, the simulations included metal-dependent gas cooling, stochastic star formation, and stellar feedback. The simulated progenitor galaxies were gas-rich spirals consistent with observed galaxies at redshifts z = 1 and z = 2. The virial masses of the progenitors were set to correspond to typical massive galaxies at their redshifts using the Press-Schechter mass function, while the initial masses for the central SMBHs were set using observed MBH-M⋆ and MBH-σ⋆ relations. The gas fractions and metal abundances of the progenitors were calibrated using observational data at their respective redshifts. The KETJU and GADGET-3 simulations produced very similar results for the overall evolution of a given merger configuration. Consistent with earlier studies, the kinetic feedback was observed to be significantly more effective at removing gas from the galaxies than the thermal feedback. 
The combined effect of AGN and stellar feedback was observed to strongly suppress star formation, with the star formation of one merger being almost completely shut down. The thermal and kinetic feedback models caused noticeable differences in the orbital evolution of the SMBH binaries. Merger timescales were significantly longer for the SMBHs in the KETJU simulations with kinetic feedback. In general, the merger timescales increased with decreasing initial eccentricity for the SMBH binary. The merger remnants were compared to observed MBH-σ⋆, R1/2-M⋆, fgas-M⋆, and mass-metallicity relations. Overall, the remnants were reasonably consistent with the observed relations. Hence, we can conclude that AGN feedback plays a crucial role in galaxy evolution and that both the thermal and kinetic feedback models are able to produce realistic high-redshift galaxies.
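The contrast between the two feedback models mentioned above, heating versus 'kicking' gas particles, can be illustrated with a deliberately simplified one-dimensional sketch (the actual implementations couple to the SMBH accretion rate and distribute energy over SPH neighbours; everything below is a toy illustration):

```python
import math
import random

def thermal_feedback(internal_energies, e_agn):
    """Thermal model: the AGN feedback energy is deposited as
    internal energy of the surrounding gas particles (heating)."""
    share = e_agn / len(internal_energies)
    return [u + share for u in internal_energies]

def kinetic_feedback(velocities, e_agn, particle_mass):
    """Kinetic model: the same energy budget is converted into
    velocity 'kicks' of magnitude dv = sqrt(2 E / m) per particle."""
    dv = math.sqrt(2.0 * e_agn / (len(velocities) * particle_mass))
    # Random kick direction in this 1-D toy; 3-D codes draw a vector.
    return [v + random.choice((-1.0, 1.0)) * dv for v in velocities]
```

With equal energy budgets the kinetic variant directly imparts bulk motion and so tends to expel gas, while the thermal variant relies on pressure gradients, part of which can be lost to radiative cooling; this is one common explanation for why kinetic feedback removes gas more effectively.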
  • Koskentausta, Juho (2023)
    Global warming is rapidly reducing the Arctic sea ice cover. Along with its major impacts in the Arctic, the sea ice loss is known to affect the climate in remote continental areas. In this thesis, the remote effects are studied by analysing data from experiments carried out using the ECHAM6 atmospheric general circulation model forced with simple sea ice concentration and sea surface temperature configurations. The European and Asian midlatitude winter responses of surface air temperature are investigated, together with anomalies of variables potentially affecting them: sea level pressure, thermal advection, adiabatic and diabatic heating and surface energy fluxes. Arctic sea ice loss was found to have contributed weakly to the European warming from the 1980s to the 2010s. With sea ice and sea surface temperature conditions projected for 2071–2100, the warming response was about 1 °C relative to the 1979–2008 climatology, despite a negative North Atlantic Oscillation phase response. In Asia, the model simulates slight cooling of about 0.5 °C east of the Urals in the past and in the eastern parts of the continent in the future. However, the cooling responses are overwhelmed by the warming effect of global forcings. The effects of internal variability are large, and the role of the thermodynamic processes and surface energy fluxes in the link between the Arctic and Eurasia is not very clear. However, the temperature responses are mostly consistent with previous research, even though the model does not capture the observed past changes well.
  • Kirjonen, Sakarias (2024)
    One of the main goals of materials research is to find the link between the properties of materials and their fundamental structures. Thin films, categorized as materials ranging in thickness from a few atomic layers to some hundreds of nanometers, have distinct properties that are in unparalleled demand in modern device manufacturing, and thus the investigation of the factors which determine thin film structure and morphology is a vital area of research. In the case of thin films, the final structure can often be traced back to the initial film formation stages, such as the crystallographic growth competition during island growth and coalescence. In this thesis, thin film growth stages are studied from the perspective of how they are affected by impurities. From the initial diffusion of adatoms on a bare substrate; to the formation of islands, their growth and coalescence; to the mobility of grain boundaries and bulk diffusion leading to the formation of a fully continuous layer; impurities influence each of these thin film growth processes in a multifaceted way, acting as growth inhibitors, promoters or potentially neutral agents. To this end, Ag and Cu thin films were synthesized by magnetron sputtering onto SiO_2/Si substrates, with thicknesses ranging from 3 nm to 30 nm using varied deposition conditions, with the addition of a 3 nm amorphous carbon layer to limit further restructuring and oxidation. Impurities were admitted into the deposition atmosphere via a controlled opening of a leak valve, corresponding to a step-wise increase of base pressure from 10^(-8) Torr to 10^(-6) Torr and finally 10^(-5) Torr. The full range of thin films was deposited at each base pressure (except 10^(-5) Torr for Cu) using two deposition rates, around 0.1 Ås^(-1) and 2 Ås^(-1). Each film was characterized ex situ with ellipsometry, four-point probe (4PP) measurements, XRD and AFM to map the morphological and microstructural evolution during film growth. 
It is found that impurities tend to inhibit island coalescence and initial grain growth, reducing the thickness at which a continuous film forms and the average grain size, and leading to flatter films with, in most cases, lower surface roughness. In later stages, it is found that impurities may allow for more grain growth through their incorporation into the growing facets. In terms of crystal structure, it is shown that impurities have a more pronounced effect on (111) oriented grains, inhibiting their growth and thus altering the preferred growth orientations of Ag and Cu by allowing (200) grains to grow larger. Grain radii and equivalent ellipse distributions showed the different responses of Ag and Cu to impurities. Ag films showed more prominent effects when a lower deposition rate was used, highlighting the impact of impurities on diffusive processes, while Cu films exhibited stronger effects at higher deposition rates, indicating that the role of impurities in this case was more significant after the formation of a continuous layer.
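The reason base pressure and deposition rate together control impurity incorporation is that the impurity arrival rate follows the kinetic-theory impingement flux, while the metal arrival rate is set by the deposition rate; their ratio determines the impurity content per deposited layer. A back-of-the-envelope sketch (not code from the thesis; the residual gas is assumed here to be water at room temperature):

```python
import math

K_B = 1.380649e-23        # Boltzmann constant, J/K
N_A = 6.02214076e23       # Avogadro constant, 1/mol

def impingement_flux(p_torr, molar_mass_kg_mol, temp_k=300.0):
    """Kinetic-theory impingement flux Phi = P / sqrt(2 pi m kB T)
    in molecules m^-2 s^-1, with the pressure given in Torr."""
    p_pa = p_torr * 133.322                # Torr -> Pa
    m = molar_mass_kg_mol / N_A            # mass of one molecule, kg
    return p_pa / math.sqrt(2.0 * math.pi * m * K_B * temp_k)

# Residual gas assumed to be water (18 g/mol) at room temperature:
low = impingement_flux(1e-8, 0.018)        # base pressure 1e-8 Torr
high = impingement_flux(1e-5, 0.018)       # base pressure 1e-5 Torr
print(high / low)                          # flux scales linearly: 1000x
```

At around 10^(-6) Torr the impurity flux is already of the order of a monolayer per second, which helps explain why films grown at the lower deposition rate are exposed to more impurities per deposited atom.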
  • Strömberg, Jani (2021)
    Air temperatures are commonly higher in urban environments than in rural ones. The energy input of solar radiation and its storage in urban surfaces changes the way the surface interacts with the atmosphere through turbulent fluxes and mixing processes. The complexity of radiative properties combined with the effect of urban geometry makes the magnitude of the effect radiation has on the dynamics of boundary layer flow an important area of study. The aim of this study is to understand and quantify how much radiative processes alter the flow field and turbulence in a real urban street canyon in Helsinki. The model used is the large-eddy simulation (LES) model PALM, which solves the flow at the most relevant atmospheric scales describing interactions between the surface and the atmosphere. An additional library called RRTMG (Rapid Radiative Transfer Model for Global Models) is used in this study to provide the radiation input impacting the boundary layer flow. Two embedded surface models in PALM, USM (Urban Surface Model) and LSM (Land-Surface Model), are used to solve the local conditions for the radiative balance based on the output of RRTMG. Two model runs are made (RRTMG On & RRTMG Off), both identical in terms of the large-scale forcing boundary conditions and land-use data, but with additional radiation input in RRTMG On. The results show that radiation alters the low-level stratification of potential temperature, which leads to more unstable conditions. Near-surface air temperatures within the canyon increased by 3.9 °C on average. Horizontal wind speeds close to the ground increased by 76 % compared to RRTMG Off. RRTMG On also showed a change in the structure of the topographically forced canyon vortex, as the low-wind conditions allowed radiative effects to contribute more strongly to its forcing. 
The center of the vortex shifted towards the center of the canyon, and the vertical motions on opposing sides of the street were strengthened by 0.15 m/s in both vertical directions. Additionally, both mechanical and thermal turbulence production increased with RRTMG On, while the thermal production remained one order of magnitude smaller than the mechanical production within Mäkelänkatu. Higher wind speeds and their variance gave rise to increased mechanical production of turbulence, and radiative effects increased the thermal production. More research is however needed to determine the role of thermal turbulence production in situations with different meteorological conditions or in other cities.
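The mechanical and thermal production terms compared above are, in standard boundary-layer notation, the shear and buoyancy terms of the turbulence kinetic energy budget (a textbook formulation, not reproduced from the thesis):

```latex
S = -\overline{u'w'}\,\frac{\partial \bar{u}}{\partial z},
\qquad
B = \frac{g}{\bar{\theta}}\,\overline{w'\theta'}
```

Here S is the mechanical (shear) production driven by the vertical gradient of the mean wind, and B is the thermal (buoyant) production driven by the vertical kinematic heat flux; B is positive in the unstable conditions reported above.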
  • Ukkonen, Aino (2018)
    The human papillomavirus (HPV) is the main cause of cervical cancer in women and the most common sexually transmitted infection. Two methods are in use to prevent women from developing cervical cancer: the primary method is vaccination against HPV and the secondary is screening. Some long-term effects of screening and vaccination will not be observed during the first decades of vaccination, and therefore predictive mathematical models serve as an indicator of what to anticipate in the future. In this thesis we studied two types of cervical cancer, adenocarcinoma and squamous cell carcinoma. We determined how vaccination against a virus type associated with squamous cell carcinoma, together with screening programs, affects adenocarcinomas caused by non-vaccine-type HPV. A precancerous adenocarcinoma lesion, which is located deeper in the glandular cervical tissue, is difficult to detect directly in screening but can be found indirectly by uncovering a common type of cervical cancer, squamous cell carcinoma, which is easier to detect in screening. These two cancers are mostly associated with two different strains of HPV. When vaccinating against the strain found in squamous cell carcinomas, the elimination of the precancerous stages of squamous cell carcinoma means that the detection method for adenocarcinoma is impaired, which allows for a possible increase in adenocarcinoma prevalence. In this thesis we studied this possible increase. To predict the effect vaccination has on adenocarcinoma, we constructed a mathematical model of two HPV types, associated with squamous cell carcinoma and adenocarcinoma respectively. We modeled a vaccine that protects against the HPV type associated with squamous cell carcinoma but not the one associated with adenocarcinoma. The model included a simplified natural history of HPV, the progression of the disease, the vaccination program and the screening program. 
For the two virus infections we developed a deterministic compartmental progression model. In the computations we modeled a cohort of women through their lifetime and studied the precancerous findings in screenings and the number of cancer cases in the cohort. To understand which model components contributed to the adenocarcinoma incidence, a sensitivity analysis was conducted by varying the screening sensitivity, the force of infection and the recovery rates. The model yielded an increase in adenocarcinomas when vaccinating against the virus associated with squamous cell carcinoma. The incidence of adenocarcinoma correlated with the non-vaccine-type HPV infections that would otherwise have been found in screenings without vaccination. The increase in adenocarcinomas was more prominent in a sexually active population. Compared to the reduction in squamous cell carcinoma provided by the vaccine, the increase in adenocarcinoma was, although positive, very minor.
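The deterministic compartmental approach can be sketched with a heavily reduced toy model (one HPV type, no vaccination, forward-Euler integration; all compartments, rates and values below are invented for illustration, not taken from the thesis):

```python
def run_cohort(beta, sigma, screen_sens, years=50.0, dt=0.1):
    """Toy progression model for a cohort of women: susceptible S ->
    infected I (force of infection beta) -> precancerous lesion P
    (rate sigma); lesions are either detected in screening or
    progress to cancer C. Returns (detected lesions, cancer cases)
    as fractions of the cohort."""
    s, i, p, c, detected = 1.0, 0.0, 0.0, 0.0, 0.0
    progress = 0.05      # assumed lesion -> cancer rate, 1/year
    screen = 0.2         # assumed screening contact rate, 1/year
    for _ in range(int(years / dt)):
        new_inf = beta * s * dt
        new_pre = sigma * i * dt
        new_det = screen_sens * screen * p * dt
        new_can = progress * p * dt
        s -= new_inf
        i += new_inf - new_pre
        p += new_pre - new_det - new_can
        c += new_can
        detected += new_det
    return detected, c

# Impairing detection (lower effective sensitivity) raises cancer cases:
d_hi, c_hi = run_cohort(0.1, 0.1, screen_sens=0.9)
d_lo, c_lo = run_cohort(0.1, 0.1, screen_sens=0.3)
```

This mirrors the mechanism studied in the thesis: anything that removes the indirect detection route for adenocarcinoma plays the role of a lowered screen_sens, allowing more lesions to progress.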
  • Kaija, Kasperi (2018)
    Chord is a distributed hash table solution that makes a set of assumptions about its performance and about how that performance is affected when the size of the Chord network increases. This thesis studies those assumptions and the foundation they are based on. The main focus is to study how the Chord protocol performs in practice, utilizing a custom Chord protocol implementation written in Python. The performance is tested by measuring the length of lookup queries over the network and the cost of maintaining the routing invariants. Additionally, the amount of data exchanged when a new Chord node joins the network, and how data is distributed over the network in general, are also measured. The tests are repeated using various network sizes and states. The measurements are used to formulate models, and those models are then used to draw conclusions about the performance assumptions. Statistical measures of quality are used to estimate the quality of the models. The Ukko high-performance cluster is used for running the Chord networks and executing the tests.
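The lookup mechanism whose cost the thesis measures can be sketched as follows (a toy in-memory model of Chord's finger-table routing, not the thesis implementation; the ring size and node identifiers are invented):

```python
M = 6                     # identifier bits; the ring has 2^M = 64 slots
RING = 1 << M

def successor(nodes, ident):
    """First node at or clockwise after ident (nodes sorted ascending)."""
    for n in nodes:
        if n >= ident:
            return n
    return nodes[0]                       # wrap around the ring

def fingers(nodes, n):
    """Finger table of node n: entry i points to successor(n + 2^i)."""
    return [successor(nodes, (n + (1 << i)) % RING) for i in range(M)]

def in_half(x, a, b):
    """x in the half-open ring interval (a, b]."""
    return (a < x <= b) if a < b else (x > a or x <= b)

def in_open(x, a, b):
    """x in the open ring interval (a, b)."""
    return (a < x < b) if a < b else (x > a or x < b)

def lookup(nodes, start, key, hops=0):
    """Resolve key starting from node start; returns (node, hop count).
    Each hop forwards to the closest preceding finger, roughly halving
    the remaining ring distance -- the source of the O(log N) claim."""
    succ = successor(nodes, (start + 1) % RING)
    if in_half(key, start, succ):
        return succ, hops                 # the successor owns the key
    for f in reversed(fingers(nodes, start)):
        if in_open(f, start, key):
            return lookup(nodes, f, key, hops + 1)
    return lookup(nodes, succ, key, hops + 1)   # fallback: linear step

nodes = sorted([1, 8, 14, 21, 32, 38, 42, 48, 51, 56])   # toy ring
print(lookup(nodes, start=8, key=54))     # resolves at node 56 in 2 hops
```

Measuring hop counts over many random keys and ring sizes, as the thesis does over a real network, is what lets the O(log N) lookup assumption be tested empirically.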
  • Viitikko, Tanja (2023)
    Pathogens are everywhere in nature, so organisms have developed various defense mechanisms against them. Two of these defense mechanisms are known as resistance and tolerance. Resistance describes the host's ability to avoid being infected by the pathogen, while tolerance describes the host's ability to reduce the fitness loss caused by the infection. We assume that investing in resistance reduces the transmission rate of the pathogen and investing in tolerance reduces the virulence experienced by the host. Developing the defense mechanisms is costly to the host. In this thesis, we assume that the resources invested in resistance and tolerance are taken away from the host's fecundity. The independent but simultaneous evolution of resistance and tolerance is modeled with an SIS model. The model is studied with the methods of adaptive dynamics. We concentrate on finding continuously stable strategies, which serve as the evolutionary end points for the population. We vary the ecological parameters to determine which strategies are optimal for the host in different environments. We find that for low values of the transmission rate, the hosts favor resistance over tolerance. When the transmission rate increases, resistance is traded for tolerance and the host benefits more from high tolerance. Low values of virulence result in tolerance being favored over resistance. Increasing virulence leads to a change in the defense mechanism, as for high values of virulence investing in resistance is more beneficial to the host. The same holds for the recovery rate: tolerance is favored for low values of the recovery rate and traded for resistance when the recovery rate increases. Patterns and associations between resistance and tolerance are also studied. Positive correlation between resistance and tolerance is found for low values of the transmission rate, low and high values of virulence, and high values of the recovery rate. 
Resistance and tolerance correlate negatively with high values of transmission rate, intermediate values of virulence and low values of recovery rate.
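A minimal numerical companion to the setup described above (the functional forms of the trade-offs are invented for illustration; the thesis derives its results with the machinery of adaptive dynamics):

```python
def sis_equilibrium(beta, gamma, alpha, mu):
    """Endemic infected fraction of a basic SIS model with recovery
    rate gamma, virulence alpha (disease-induced mortality) and
    background mortality mu; returns 0 if the pathogen dies out."""
    r0 = beta / (gamma + alpha + mu)      # basic reproduction ratio
    if r0 <= 1.0:
        return 0.0
    return 1.0 - 1.0 / r0

def host_strategy(res, tol, beta0=3.0, alpha0=1.0, b0=2.0, cost=0.5):
    """Assumed trade-offs: resistance res lowers the transmission
    rate, tolerance tol lowers the virulence the host suffers, and
    both investments are paid for out of the host's fecundity."""
    beta = beta0 * (1.0 - res)
    alpha = alpha0 * (1.0 - tol)
    fecundity = b0 * (1.0 - cost * (res + tol))
    return beta, alpha, fecundity
```

In the adaptive-dynamics treatment one would evaluate the invasion fitness of a rare mutant strategy (res, tol) in the environment set by the resident's endemic equilibrium and follow the selection gradient to the continuously stable strategies.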
  • Granqvist, Sonja (2023)
    Substratum and environmental variables influence benthic algal species richness and community composition. Benthic habitats form complex connections within and between communities, leading to unique water ecosystems. To better understand substratum relationships and the effects of environmental covariates on microorganisms, this study focused on benthic diatoms living on different substrata in subarctic mountain ponds. Moreover, methodological choices are important for field surveys in freshwater environments, and thus in this study we also compared sponge and brush sampling techniques to examine possible differences in benthic diatom species richness and community composition. We sampled 23 subarctic ponds between July and August 2022 in northern Fennoscandia. The samples were taken from stones and sediment. To analyse the differences in species richness between substrata and methods, we used a paired Wilcoxon signed-rank test and a paired t-test. To find the most significant environmental variables influencing diatom species richness, we used generalized linear models (GLM). Differences in diatom community composition were analysed using non-metric multidimensional scaling (NMDS), analysis of similarities (ANOSIM), and the Jaccard similarity index. Finally, to visualise the variation in community composition between stone and sediment samples explained by environmental variables, a redundancy analysis (RDA) was used. Benthic diatom species richness differed significantly between rock and sediment substrata, with sediment being the more species-rich substratum. Local environmental variables influenced diatom species richness, with water pH being the major determinant for both substrata. Diatom community composition did not differ significantly between rock and sediment substrata but was shaped by environmental variables, with pond surface area and water pH having a strong influence on both substrata. 
No significant differences were found between sampling methods in terms of diatom species richness or community composition. Our results support the theory that the sediment substratum contains the highest diatom species richness. Furthermore, the study highlights the importance of water pH for benthic diatoms regardless of substratum, supporting the reliability of diatoms as bioindicators of water pH.
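The Jaccard similarity index used above reduces two presence/absence species lists to the proportion of shared taxa (the genus names in the example are real diatom genera, but the community data are invented):

```python
def jaccard(sample_a, sample_b):
    """Jaccard similarity of two presence/absence communities:
    shared taxa divided by the total number of distinct taxa."""
    a, b = set(sample_a), set(sample_b)
    if not (a or b):
        return 1.0                        # two empty samples: identical
    return len(a & b) / len(a | b)

stone = {"Achnanthidium", "Fragilaria", "Tabellaria"}
sediment = {"Achnanthidium", "Fragilaria", "Navicula", "Pinnularia"}
print(jaccard(stone, sediment))           # 2 shared of 5 distinct -> 0.4
```

The index is 1 for identical communities and 0 for disjoint ones, which makes it a convenient input for the ANOSIM and NMDS analyses mentioned above.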