Browsing by Title


  • Valtonen, Riikka (2019)
    A 370 meter deep drill hole was drilled on the Kumpula Campus of the University of Helsinki for teaching purposes at the end of 2015. This work deals with brittle structures in the borehole, which have been compared with existing research data from the Helsinki area and nearby outcrop data. Natural breaks were observed and measured from the drill core, and that data was compared to the acoustic and optical images of the drill hole in the WellCAD software. The drill core was not oriented, so the dip direction and dip of the breaks were measured in the WellCAD software and the final correction was made in Excel. The depth, possible filling, surface roughness, and slickenside features of the breaks were recorded. A preliminary RQD value was also measured, which was excellent (94%). Five filling samples, which differed in color and texture, were subjected to XRD analysis. Approximately 700 natural breaks, plus two crushed zones with core loss, were found in the drill core. About 11% of the breaks were classified as slickensides. Most of the other break points were filled with minerals. A fracture group in which the gap was not open in the core was also observed on the basis of the acoustic image; these fractures accounted for about 11% of the natural breaks. Comparative field work was carried out on the Kumpula campus and its surroundings. Altogether ten outcrops were examined, three of them road cuts. In addition to the location, dip direction, and dip, the visible length of each fracture, possible filling of surfaces, surface roughness, and density of parallel fractures were observed. Observations were greatly affected by the quality and freshness of the outcrops. The main processing of the results was carried out in 3D in the Move software, in which the fractures of the drill hole and the collected outcrop fracture data were compared, and the locations of the crushed zones were examined more closely. Both the drill hole fractures and the fractures from the outcrops have one main dip direction to the northeast (i.e., strike towards the NW), which is in line with previous studies in the Helsinki area. The study also revealed that the strikes of the fractures differ between the northern and southern sides of the hill: the main direction on the south side was NW-SE, while on the north side it was E-W. To further reinforce this finding, more data is needed; for example, a comparative borehole on the north side of the hill would provide extra exposure for this possible direction. Another interesting research topic would be a more accurate classification of the drill hole fractures with the help of the fillings. The preliminary studies show that slickensides differ from other filled fractures in both orientation and filling, and there are at least four different fillings. Such studies could help in estimating the relative ages of the brittle structures.
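    For context, RQD (Rock Quality Designation) rates core quality as the proportion of intact core pieces at least 10 cm long, with values above 90% conventionally classed as excellent:

```latex
\mathrm{RQD} \;=\; 100\,\% \times
  \frac{\sum \left(\text{lengths of intact core pieces} \ge 10\ \mathrm{cm}\right)}
       {\text{total length of core run}}
```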
  • Lu, Mengxue (2023)
    Bioprinting has emerged as a cutting-edge technology to overcome the shortage of tissues and organs through the precise deposition of living cells and biomaterials into three-dimensional (3D) biomimetic constructs. However, the inadequate choice of bioinks has limited its widespread implementation and clinical translation. Natural polymers, such as chitosan and alginate, are commonly used as bioinks due to their biocompatibility, biodegradability and similarity to the extracellular matrix (ECM). These natural polymers are, however, usually limited by their mechanical strength and offer less tunable mechanical characteristics. Synthetic polymers, in contrast, offer adjustable mechanical properties and good printability, and they are often used as sacrificial materials in 3D bioprinting. Hybrid hydrogels consisting of Pluronic F127 (PF) and natural polymers have been suggested to have good printability and rheological behavior. However, PF tends to be cytotoxic at the concentrations required for good printability. Another synthetic copolymer, comprising poly(2-methyl-2-oxazoline) (POx) (A-block) and poly(2-n-propyl-2-oxazine) (POzi) (B-block), was investigated as a potential alternative to PF. In this work, two different hybrid platforms, synthetic POx-b-POzi/natural polymer (chitosan or alginate) and PF/natural polymer (chitosan or alginate), were formulated. The main focus of the study was on their printability and on the potential of POx-b-POzi to replace PF as a sacrificial material in 3D bioprinting. POx-b-POzi and PF-based hybrid hydrogels were formulated, and their printability was evaluated by rheology, mechanical compression, and 3D printing and printability assessment tests. The results showed that both POx-b-POzi and PF based hybrid hydrogels can be printed into different 3D structures, and the printed structures were successfully crosslinked. Although the printability assessment tests and rheology showed that PF-based hydrogels exhibit greater printability, POx-b-POzi also meets the critical requirements for bioinks.
  • Penttinen, Jussi (2021)
    HMC is a computational method built to efficiently sample from a high-dimensional distribution. Sampling from a distribution is typically a statistical problem, and hence much of the literature concerning Hamiltonian Monte Carlo is written in the mathematical language of probability theory, which perhaps is not ideally suited for HMC, since HMC is at its core differential geometry. The purpose of this text is to present the differential geometric tools needed in HMC and then methodically build the algorithm itself. Since there is an excellent introductory book on smooth manifolds by Lee, and to avoid simply reproducing Lee's work, some basic knowledge of differential geometry is assumed of the reader. Similarly, the author being more comfortable with the notions of differential geometry, and to cut down the length of this text, most theorems connected to measure and probability theory are omitted from this work. The first chapter is an introductory chapter that goes through the bare minimum of measure theory needed to motivate Hamiltonian Monte Carlo. The bulk of this text is in the second and third chapters. The second chapter presents the concepts of differential geometry needed to understand the abstract construction of Hamiltonian Monte Carlo. Those familiar with differential geometry can possibly skip the second chapter, even though it might be worthwhile to at least flip through it to pick up the notation used in this text. The third chapter is the core of this text. There the algorithm is methodically built using the groundwork laid in the previous chapters. The most important part and the theoretical heart of the algorithm is presented in the sections discussing the lift of the target measure. The fourth chapter provides brief practical insight into implementing HMC and also briefly discusses how HMC is currently being improved.
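    For readers new to the algorithm itself: one HMC transition resamples a momentum, integrates Hamilton's equations with the leapfrog scheme, and applies a Metropolis correction. Below is a minimal Python sketch of such a transition; it is not from the thesis, and the Gaussian target and tuning values are illustrative assumptions.

```python
import numpy as np

# Target density p(q) ~ exp(-U(q)); here a standard 2-D Gaussian.
def U(q):
    return 0.5 * np.dot(q, q)          # potential = negative log-density

def grad_U(q):
    return q

def hmc_step(q, rng, step_size=0.1, n_steps=20):
    """One HMC transition: momentum resampling, leapfrog, Metropolis test."""
    p = rng.standard_normal(q.shape)               # fresh momentum
    q_new, p_new = q.copy(), p.copy()
    p_new -= 0.5 * step_size * grad_U(q_new)       # half momentum step
    for _ in range(n_steps - 1):
        q_new += step_size * p_new                 # full position step
        p_new -= step_size * grad_U(q_new)         # full momentum step
    q_new += step_size * p_new
    p_new -= 0.5 * step_size * grad_U(q_new)       # closing half step
    # Accept with probability exp(-(H_new - H_old)), H = U + kinetic energy
    dH = U(q_new) + 0.5 * p_new @ p_new - (U(q) + 0.5 * p @ p)
    return q_new if np.log(rng.uniform()) < -dH else q

rng = np.random.default_rng(42)
q = np.zeros(2)
samples = []
for _ in range(1000):
    q = hmc_step(q, rng)
    samples.append(q)
```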
  • Tiihonen, Rosa (2016)
    Recent studies have suggested that iron (Fe) and manganese (Mn) oxides may play a role in the anaerobic oxidation of methane (AOM) in the brackish coastal sediments of the Baltic Sea. However, the distribution of these oxides in coastal sediments had not yet been established. In this study, sedimentary Fe and Mn dynamics were studied along Pohjanpitäjänlahti, a silled estuary, and the adjacent archipelago in Uusimaa, Finland. The estuary is fed by the Fiskarinjoki and Mustionjoki rivers and discharges into the Gulf of Finland through a narrow, salinity-stratified strait. Sediment and porewater samples for chemical profiling were obtained by GEMAX™ coring, sediment slicing and Rhizon™ porewater extraction. Water samples were obtained with a Limnos™ water sampler and analyzed for dissolved Fe and Mn. Sedimentary Fe and Mn concentrations and speciation were determined by sequential extraction, including a separate extraction scheme for sulfur-bound phases. The results of this study show that the distributions of iron and manganese are heterogeneous in the coastal zone of the Gulf of Finland. The dissolved Fe concentration decreases rapidly from the river mouth due to salinity-induced flocculation, and the sedimentary Fe concentration decreases steadily offshore. In contrast, dissolved and sedimentary Mn concentrations are highest in the deep inner basin of Pohjanpitäjänlahti. This implies internal shuttling of Mn related to the redox conditions in the estuary. The Fe and Mn speciation of the inshore sites is dominated by more reactive phases, such as poorly crystalline and crystalline oxides, while at the offshore sites less reactive phases, such as sheet silicates, are more dominant. Fe and Mn oxides are present at all study sites throughout the sediment cores, which makes them theoretically available for Fe-Mn-mediated AOM.
  • McDonald, Isabel (2020)
    Talc is a problematic alteration mineral at the Kevitsa Ni-Cu-(PGE) mine in Sodankylä, Finland, and its distribution and controls were assessed in this thesis. Kevitsa is a polymetallic mine hosted in an ultramafic intrusion, extracting Ni, Cu, Co, Au, Pt and Pd, which are of increasing importance in green energy technologies. Talc, a common alteration product in ultramafic rocks, detrimentally interferes with the recovery of copper in the flotation stage of ore processing when concentrations exceed 5 wt.%, thus affecting the economics of mine operations. It was found that different talc concentrations had different spatial associations and controls, with three dominant styles identified, and a multi-stage genesis of talc alteration is proposed. The talc styles identified in the study are as follows: (style 1) pervasive talc-chlorite alteration, (style 2) talc-dolomite alteration haloes proximal to dolomite veins, and (style 3) talc on brittle structures, associated with magnetite. Low talc values between 0.2 and 0.5 wt.% (style 1) were found to have no preferential spatial distribution, occurring as background alteration throughout the intrusion. Intermediate values (between 1 and 5 wt.%) were associated with late brittle fractures and structures (style 3), with a notable association with the NE-flt-rv1 fault zone. Style 2 was found to have a dominant structural control, specifically an association with north-south trending structures; the dominant structures with this association are NS-flt1_flt-002 and NS-flt-2_flt-009. The highest values (commonly exceeding 10 wt.%) manifest as alteration haloes proximal to veins, where talc-carbonate replaces the intercumulus mineral phases. It is proposed that ‘low talc’ alteration (style 1) was the first talc association to occur, generated by late magmatic fluids or regional metamorphism accompanying amphibole and serpentine alteration. Style 2 was likely generated by the infilling of north-south trending structures by carbonate-talc veins through metasomatism by a CO2-rich metamorphic fluid, perhaps delivered by a deep-seated structure, often generating talc values in excess of 10 wt.%. The third stage is proposed to be talc enrichment via meteoric fluid percolation after exhumation; this generated talc along brittle structures associated with magnetite (style 3), and talc-carbonate concentrations may also have been upgraded at this stage. Further enrichment of talc is observed at the surface, attributed to freeze-thaw cycles of permafrost upgrading talc values. The identification of these processes and controls on talc not only has implications for the economics of Kevitsa, as high-talc zones can be avoided, but the findings may also have useful applications for the mining of similar deposits in the Central Lapland Greenstone Belt, such as the nearby Sakatti Cu-Ni-(PGE) project, when it enters production.
  • Zhou, You (2015)
    Longwave (LW) radiation in the Earth's atmosphere is defined as radiation at wavelengths longer than 4 µm (infrared), while shortwave (SW) radiation has wavelengths below 4 µm (visible light, ultraviolet). SW radiation is usually of solar origin. The absorbed solar SW radiation is closely balanced by the outgoing LW radiation in the atmosphere, and this radiation balance keeps the global average temperature stable. The main cause of the current global warming trend is the human intensification of the 'greenhouse effect'. Atmospheric greenhouse gases absorb the thermal LW radiation from a planetary surface and re-emit the absorbed radiation in all directions. Since part of the re-radiation is directed towards the surface, some of the energy is transferred back to the surface and the lower atmosphere, resulting in an increased surface temperature. The local radiation balance is also affected by clouds and aerosols in the atmosphere, since they too can absorb and scatter radiation. The effects of clouds and greenhouse gases on the global radiative balance and surface temperature are well known. Aerosols, however, are one of the greatest sources of uncertainty in the interpretation and projection of climate change. Natural aerosols, such as those due to large volcanic eruptions and wind-blown mineral dust, are recognised as significant sources of climate forcing. In addition, there are several ways in which humans are altering atmospheric aerosols; these include industrial emissions to the lower atmosphere as well as aircraft emissions as high as the lower stratosphere. In this thesis the effect of aerosols on LW radiation was studied based on narrowband LW calculations in a reference mid-latitude summer atmosphere with and without aerosols. Aerosols were added to the narrowband LW scheme based on their typical schematic observed spectral and vertical behaviour over European land areas, which was found to agree with spectral aerosol data from the Lanzhou University Semi-Arid Climate Observatory and Laboratory measurement stations in north-western China. A volcanic stratospheric aerosol load was found to induce local LW warming with a stronger column 'greenhouse effect' than a doubled CO2 concentration. A heavy near-surface aerosol load was found to increase the downwelling LW radiation to the surface and to reduce the outgoing LW radiation, acting very much like a thin low cloud in increasing the LW greenhouse effect of the atmosphere. The shortwave reflection of white aerosol has, however, a stronger impact in general, but the aerosol LW greenhouse effect is non-negligible under heavy aerosol loads.
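    To illustrate the 4 µm division quoted above, the following Python sketch (not from the thesis) integrates the Planck function for an approximate solar temperature (5778 K) and a typical surface temperature (288 K), showing that solar emission falls almost entirely below 4 µm and terrestrial emission almost entirely above it.

```python
import numpy as np

# Planck spectral radiance B(lam, T); constants in SI units.
h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    with np.errstate(over="ignore"):           # exp overflow -> B ~ 0
        return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

lam = np.logspace(-8, -3, 20000)               # 10 nm .. 1 mm wavelength grid

def fraction_above(lam_cut, T):
    """Fraction of total blackbody emission at wavelengths > lam_cut."""
    b = planck(lam, T)
    return np.trapz(np.where(lam > lam_cut, b, 0.0), lam) / np.trapz(b, lam)

print(f"Sun,   5778 K: {fraction_above(4e-6, 5778):.1%} emitted beyond 4 um")
print(f"Earth,  288 K: {fraction_above(4e-6, 288):.1%} emitted beyond 4 um")
```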
  • Ylivinkka, Ilona (2019)
    Volatile organic compounds (VOCs) are hydrocarbons that are emitted to the atmosphere from biogenic and anthropogenic sources. Plants emit VOCs as a part of their normal metabolism, but the emissions increase significantly under stress, caused for example by heat waves, drought and herbivory. Laboratory studies have shown that VOCs emitted by herbivore-infested boreal forest trees enhance secondary organic aerosol (SOA) production. In this study, 25 years (1992–2016) of atmospheric data from a measurement site in eastern Finnish Lapland were analyzed to understand whether this enhancement is atmospherically relevant. The knowledge is important, as aerosol particles cause changes in radiative forcing and thus contribute to climate change. At the study site, autumnal moth (Epirrita autumnata) larvae are a prominent defoliator of mountain birches (Betula pubescens spp. czerepanovii). Autumnal moths have cyclic population dynamics, and during severe population outbreaks they can consume all the leaves of mountain birches over vast regions. Despite the severity of the herbivory to the local ecosystem, the analysis did not show a connection between the number of autumnal moths and aerosol processes. No clear correlation was observed between the total number concentration and temperature, and hence the basal VOC emissions from biogenic sources. Nor did the sulfur dioxide or sulfuric acid concentrations show the strong correlation with the total particle concentration that would have been expected. The results indicate that the total biomass of mountain birches is probably too small to cause detectable changes in atmospheric variables. Additionally, the study period included only one severe population outbreak, during which the availability of atmospheric data was limited. However, climate change proceeds fast in the Arctic region; hence the basal VOC emissions from vegetation will increase, and both the mountain birches and new moth species will expand into areas where they previously did not succeed. In the future, the enhancement of SOA production by autumnal moth larvae feeding may therefore become atmospherically relevant.
  • Cardwell, Amanda (2017)
    Population growth and the conversion of forests to agricultural lands is a typical phenomenon in the highlands of East Africa. Land use changes put pressure on ecosystem services and natural resources, such as fresh water, which is essential for human well-being. Soil forms the largest free fresh water storage on the planet. To maximize the potential of groundwater reservoirs in a continually changing environment, it is extremely important to understand the factors controlling the infiltration process. The effect of land use on infiltration has been studied broadly, but due to increasing water scarcity and difficult geographical accessibility, the number of infiltration studies conducted in the highlands of East Africa is low. The aim of this study was to examine whether land use affects infiltration in Taita Hills (3°25' S, 38°20' E), a tropical highland environment in southeastern Kenya, and whether the changes can be explained by changes in soil properties. Another aim was to examine whether the collected field infiltration data can be modelled with established infiltration models, which could potentially decrease the need for time- and water-consuming field infiltration measurements in the future. The study focused on three land use classes: forests, cultivations and grazing lands. The collected field data consisted of field infiltration measurements (n=50) and corresponding soil organic carbon, soil organic nitrogen, soil bulk density, soil moisture and soil texture samples. The effect of land use on infiltration was examined with one-way ANOVA and pairwise t-tests. The relationship between soil properties and steady-state infiltration rates was investigated with simple linear regression models. According to the results, infiltration rates vary by land use. The mean infiltration rates were 3900, 1700 and 450 mm/h in forests, cultivations and grazing lands, respectively, corresponding to decreases of 59% and 88% in infiltration rates when forests are converted to cultivations and grazing lands, respectively. According to the simple regression models, soil bulk density explains 61.5% of the variation in infiltration rates, while soil organic carbon and soil organic nitrogen explain 24.0% and 34.1%, respectively. Initial soil moisture explained 15.6% of the variation but is believed to reflect the climatic and structural conditions of the soil rather than a direct impact on the infiltration process. The results suggest that soil texture does not explain the variation in infiltration, which is most likely due to the homogeneous soil across the study area. Horton's infiltration model was found to be the most suitable for modelling the infiltration process within the study area, although the overall performance of the Philip and Green-Ampt models was also sufficient. The Modified Kostiakov model was not found suitable for modelling the infiltration process of the study area.
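    As an illustration of the model family used, Horton's equation describes the decay of the infiltration rate f(t) from an initial rate f0 to the steady-state rate fc. A minimal Python sketch of fitting it with SciPy follows; the measurement values are invented placeholders, not data from the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

# Horton's model: f(t) = fc + (f0 - fc) * exp(-k t), with initial rate f0,
# steady-state rate fc and decay constant k.
def horton(t, f0, fc, k):
    return fc + (f0 - fc) * np.exp(-k * t)

# Invented infiltrometer readings (t in minutes, f in mm/h), placeholders only
t = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 40.0, 60.0])
f = np.array([5200.0, 4900.0, 4500.0, 4150.0, 3970.0, 3910.0, 3900.0])

(f0, fc, k), _ = curve_fit(horton, t, f, p0=(5000.0, 3900.0, 0.1))
print(f"f0 = {f0:.0f} mm/h, fc = {fc:.0f} mm/h, k = {k:.3f} 1/min")
```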
  • Kaltiainen, Aino (2024)
    The planetary boundary layer (PBL) is the layer of the atmosphere directly influenced by the presence of the Earth's surface. In addition to its importance to the weather and climate systems, it plays a significant role in controlling air pollution levels and low-level heat conditions, thereby directly influencing general well-being. While the modification of boundary layer conditions by varying atmospheric forcings has been widely studied and discussed, the dominant states of PBL variation in response to this modification remain unknown. In this study, the dominant daytime and nighttime boundary layer types are examined. To understand the factors contributing to the development of these layers, weather regimes in the northern Atlantic-European region are considered. Machine learning techniques are utilized to study both the boundary layer and the large-scale flow classes, with an emphasis on unsupervised learning methods. It was found that the boundary layers in Helsinki, Finland, can be categorized into four daytime and three nighttime types, each characterized by the dominant turbulence production mechanism or the absence thereof. During the daytime, layers driven by both mechanical and buoyant turbulence are observed in summer, autumn, and spring, while purely buoyancy-driven layers occur in summer and winter, and purely mechanically driven layers emerge in autumn, winter, and spring. Additionally, a layer characterized by overall reduced turbulence production is present throughout all seasons. During the nighttime, all three boundary layer types (buoyancy-driven, mechanically driven, and stable) are observed in all seasons. Each boundary layer type exhibits season-specific variations, whereas daytime and nighttime boundary layers driven by the same mechanisms reflect the diurnal cycle of their relative intensities. The analysis revealed that the weather regimes producing cyclonic and anticyclonic flow anomalies over southern Finland collectively influence the boundary layer conditions, whereas the impact of individual weather regimes remains relatively small. Large-scale flow variation is associated with changes in boundary layer dynamics through alterations in the surface radiation budget (cloudiness) and wind conditions, thereby influencing the relative intensities of mechanical and buoyant turbulence production. However, inconsistencies in the analysis suggest that additional mechanisms, such as mesoscale phenomena, must also contribute to the development of the observed boundary layer types.
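    The abstract does not name the specific unsupervised method, so purely as a generic illustration, the sketch below clusters hourly boundary-layer observations into four types with k-means; the feature set and data are placeholder assumptions, not from the thesis.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: one row per observation hour; columns could
# be e.g. friction velocity, sensible heat flux, wind speed, net radiation.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))                  # placeholder observations

X_scaled = StandardScaler().fit_transform(X)    # features on a common scale
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_scaled)

labels = kmeans.labels_                         # boundary layer "type" per hour
print(np.bincount(labels))                      # size of each cluster
```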
  • Keskitalo, Reijo (University of Helsinki, 2005)
    As a full-grown science, cosmology is relatively young. Even though man has pondered the existence and structure of the universe throughout his history, the lack of actual observational data long prevented analytical research. Observational cosmology can be seen to have been born in the 1920s, when Edwin Hubble discovered that the galaxies surrounding us are receding in all directions. This led to the conclusion that the universe around us is itself expanding. Expansion occurring isotropically in all directions indicates that the universe was once much denser and hotter; so hot that the matter in it was completely ionized plasma. The decrease in temperature caused by the expansion is calculated to have neutralized the plasma, an event called recombination, over thirteen billion years ago. The instant is cosmologically remarkable, since light that until that moment scattered frequently from the charged particles then began to propagate freely. Initially at a temperature of three thousand kelvin, the radiation has since cooled down due to the expansion and is now observed as the three-kelvin cosmic microwave background radiation (CMB). The first observations of the CMB date back to 1965. Since the background radiation has traveled its long journey relatively unchanged, its study can yield direct information on the conditions of the early universe. It was theoretically expected, well before observational confirmation in 1992, that the CMB should have a structure reflecting those inhomogeneities that have now undergone ten billion years of evolution to become the large-scale structure we observe: galaxies, galaxy clusters and ever larger entities. In this thesis we examine how the effects of two cosmological parameters, the matter and baryon densities of the universe, manifest in the pre-recombination dynamics and how these effects are reflected in the structure of the observed CMB anisotropy. Baryons are the “ordinary” matter all around us, protons and neutrons; the concept of “matter” is extended to include the unknown dark matter, whose existence is known only through its gravitational effects. We review the equations that are necessary to track the evolution of the primordial perturbations. With a computer program based on those equations, we display how the early-universe dynamics change with the values of the density parameters. Finally, we show how these effects are reflected in the angular power spectrum that describes the structure of the microwave background.
  • Lassila, Maria (2013)
    In the Arctic Ocean, both the sea ice extent and the sea ice thickness have decreased dramatically during recent years. This has most probably also caused changes in air mass routes and led to an unusually negative Arctic Oscillation (AO) index, especially in winter. It is likely that aerosol production in the Arctic will increase with the declining sea ice cover. In this study we used five years of aerosol size distribution measurements from the Zeppelin station in Ny-Ålesund, Svalbard. We compared trajectory, sea ice and Arctic Oscillation index (AO-index) data to find out whether aerosol size distribution properties over sea ice differ from properties over open sea, and whether there are differences between negative (AO-) and positive (AO+) AO-index situations. We divided the data into four sectors and three seasons (spring, summer, and autumn-winter). During autumn and winter the number concentration distribution is clearly different over open sea than over sea ice; however, the sea ice concentration does not have an effect on the number concentration. The total number concentration is smaller over open sea (less than 60 cm⁻³) than over ice (14 to 120 cm⁻³). During spring, the aerosol number concentration is dominated by accumulation mode particles, while during summer the Aitken mode number concentrations are higher than the accumulation mode number concentrations. The lower the AO-index, the larger the particles. During summer the number concentration distribution is completely different from that during autumn and winter. Differences between AO- and AO+ situations were small; seasonality was clearly more important. It is clear that sea ice has an effect on the aerosol size distribution properties, but the exact ice concentration does not seem to matter. The AO-index also has an effect on the aerosol size distribution, but whether the AO and sea ice together affect the aerosol properties is less clear than the effect of the sea ice itself. In the future, as the sea ice extent decreases, open-sea situations will become more common than sea-ice situations. It is then possible that the nucleation mode number concentration will increase with the decreasing sea ice.
  • Karttunen, Sasu (2020)
    Air pollution is the most severe environmental problem in the world in terms of human health. The World Health Organisation (WHO) estimates that 91% of the world's population is exposed to high air pollutant levels. The risks are particularly high in urban areas, where high population densities are often combined with high air pollutant levels. Urban street canyons are especially prone to high pollutant levels due to the proximity of traffic and the reduced exchange of air between the street canyon and the air above, referred to as ventilation. As a result, one of the most important topics in city planning is how to avoid designs that impact air quality negatively. Street trees are often planted in street canyons for aesthetic purposes, while they can also improve thermal comfort. Street trees affect the air quality within street canyons in two ways: they provide leaf surface for air pollutants to deposit on, thus cleaning the air, but they also block the airflow within the street canyon, thus decreasing the ventilation of air pollutants. In previous studies the latter effect has generally been found to be stronger. However, due to the various benefits of street trees, leaving them completely out of street canyon designs is rarely an option. The City of Helsinki is planning to develop its current inbound motorways into city boulevards, which has raised concerns about local air quality due to the high projected traffic volumes. The aim of this study was to find which of five street-tree scenarios, realistic for the city boulevards, is the best in terms of air quality. Pedestrian-level aerosol mass concentrations were used as the measure of air quality. Furthermore, the impacts of vegetation and the dependency of aerosol mass concentrations on various flow statistics were studied in order to explain the differences between the scenarios. The large-eddy simulation (LES) model PALM was utilised to study the flow field above and within a city boulevard and to model the dispersion of traffic-related aerosols. Aerosol particles of different sizes were represented using the sectional aerosol model SALSA. The suitability of the LES setup for such intercomparison studies was also investigated. The results showed that street trees generally have a considerable negative impact (-2% to 54%) on pedestrian-level aerosol mass concentrations. Trees were found to reduce the mean wind speeds within the street canyon, which correlated strongly with the pedestrian-level concentrations. This was particularly pronounced with a wind direction parallel to the street canyon, due to decreased ventilation. Turbulence produced by the street trees was partially able to compensate for the reduced ventilation in some scenarios, and the increased turbulence could be observed up to heights exceeding the maximum building height. Based on the results, it is recommended to prefer variable-height street-tree canopies over uniform ones within street canyons similar to the one studied. An uneven canopy increases turbulence and the related pollutant transport, which partially compensates for the ventilation lost to decreased wind speeds. It is also advisable to minimise the ratio of the total crown volume to the street canyon volume, as ventilation decreases sharply as this ratio increases.
  • Cole, Elizabeth (2011)
    Thermal instability (hereafter TI) is investigated in numerical simulations to determine its effect on the growth and efficiency of dynamo processes. The setup used is a three-dimensional periodic cube with a size several times the correlation length of the interstellar turbulence. The simulations are designed to model the interstellar medium without any shear or rotation, to isolate the effect of TI. Hydrodynamical and nonhelical simulations are run for comparison, to determine the effects the magnetic field has upon the gas itself. Turbulence is simulated by external helical forcing of varying strength, which is known to create a large-scale dynamo of α²-type. Nonhelical cases are also explored in an attempt to create a small-scale dynamo at high Rm, but no dynamo action could be detected in the range Rm ≈ 30–150. The hydrodynamical simulations reproduce the tendency of the gas to separate into two phases when an unstable cooling function is present. The critical magnetic Reynolds number of the large-scale dynamo was observed to be almost twice as large for the unstable as for the stable cooling function, indicating that the dynamo is harder to excite when TI is present. The efficiency of the dynamo, as measured by the ratio of magnetic to kinetic energy, was found to increase for the unstable case at higher forcing. The results of the runs in this thesis are part of a larger project studying dynamo action in interstellar flows.
  • Ovaskainen, Osma (2024)
    Objective: The objective of this thesis is to create methods to transform the most accessible digitalized version of an apartment, the floor plan, into a format that can be analyzed by statistical modeling, and to use the created data to find whether there are any spatial or temporal effects in the geometry of apartment floor plans. Methods: The first part of the thesis uses a mix of computer vision image manipulation methods combined with text recognition. The second part uses a one-way ANOVA model. Results: With the computer vision pipeline we were able to successfully classify a portion of the data; however, the recognition still leaves considerable room for improvement. From the created data, we were able to identify some key differences with respect to our parameters, location and year of construction. The analysis, however, suffers from a rather limited dataset, in which a few housing corporations play a large role in the final results, so it would be wise to repeat this experiment with a more comprehensive dataset for more accurate results.
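    As a sketch of the statistical step, a one-way ANOVA tests whether some floor-plan metric differs between groups; the grouping by construction decade and all values below are invented placeholders, not data from the thesis.

```python
import numpy as np
from scipy import stats

# Hypothetical floor-plan metric (e.g. a room-count or area ratio)
# grouped by construction decade; values are placeholders.
g1960s = np.array([3.1, 2.8, 3.4, 3.0, 2.9])
g1990s = np.array([3.6, 3.3, 3.8, 3.5])
g2010s = np.array([4.0, 3.7, 4.2, 3.9, 4.1])

# One-way ANOVA: does the group mean differ across decades?
f_stat, p_value = stats.f_oneway(g1960s, g1990s, g2010s)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```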
  • Tene, Idan (2024)
    Accurate forest height estimates are critical for environmental, ecological, and economic reasons. They are a crucial parameter for developing forest management responses to climate change and for sustainable forest management practices, and a good covariate for estimating biomass, volume, and biodiversity, among others. With the increased availability of Light Detection and Ranging (LiDAR) data and high-resolution images (both satellite and aerial), it has become more common to estimate forest heights from the sensor fusion of these instruments. However, comparing recent advancements in height estimation methods is challenging due to the lack of a framework that considers the impact of the varying data resolutions (which can range from 1 meter to 100 meters) used with techniques like convolutional neural networks (CNNs). In this work, we address this gap and explore how resolution affects error metrics in forest height estimation. We implement and replicate three state-of-the-art convolutional neural networks and analyse how their error metrics change as a function of the input and target resolution. Our findings suggest that as resolution decreases, the error metrics appear to improve. We hypothesize that this improvement does not reflect a true increase in accuracy, but rather a fundamental shift in what the model is learning at lower resolutions. We identify a possible change point between 3 meter and 5 meter resolution, where estimating forest height potentially transitions to estimating overall forest structure.
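    The resolution effect hypothesized above can be demonstrated with a toy experiment (not from the thesis): block-averaging both a prediction and a reference raster to coarser cells shrinks the per-pixel error even though the model itself is unchanged. The canopy-height arrays below are synthetic placeholders.

```python
import numpy as np

# Synthetic canopy-height rasters (metres): a "truth" map and a noisy
# stand-in for model output; both are invented placeholder data.
rng = np.random.default_rng(1)
truth = rng.gamma(4.0, 4.0, size=(512, 512))          # heights around 16 m
pred = truth + rng.normal(0.0, 3.0, size=truth.shape) # fake prediction

def block_mean(a, s):
    """Coarsen a 2-D array by averaging non-overlapping s-by-s blocks."""
    h, w = a.shape
    a = a[:h - h % s, :w - w % s]                      # crop to a multiple of s
    return a.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

for s in (1, 3, 5, 10):                                # nominal 1/3/5/10 m cells
    rmse = np.sqrt(np.mean((block_mean(pred, s) - block_mean(truth, s)) ** 2))
    print(f"{s:>2} m cells: RMSE = {rmse:.2f} m")      # drops as cells coarsen
```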
  • Liljedahl, Lasse (2017)
    To understand the formation of disk galaxies, it is also important to understand the different feedback mechanisms that affect the formation process. Without a feedback process to delay star formation, disk galaxies should not have ongoing star formation in the present-day Universe. However, this is not the case, since star formation is still taking place; in the Milky Way, for example, the star formation rate is still ~1 solar mass per year. Moreover, during the formation process most of the gas inside galaxies is not bound into stars. Instead, when disk galaxies form inside a dark matter halo, there is much more baryonic matter initially available in gaseous form than in stars. This contradicts the basic CDM model, according to which most of the gas should cool down and form stars in the absence of feedback. The goal of this thesis is first to introduce the theory behind disk galaxy formation and the feedback mechanisms affecting the galaxy formation process, with the main focus on supernova feedback. After introducing the theory, the aim is to compare how supernova feedback affects the formation of a massive Milky Way-like galaxy and a less massive dwarf galaxy, using a simulation code developed by Efstathiou (2000). For both galaxies four cases are simulated: two of them represent the basic galaxy formation model presented in this thesis, one considers a situation in which the galaxy has a very high star formation efficiency, and one uses a slightly refined model including some parameters that are ignored in the basic model. The work conducted in this thesis shows that supernova feedback may operate throughout the galaxy's lifetime and cause a significant portion of the gas to escape the galaxy. This suggests that supernova-driven feedback might be one reason why disk galaxies in the present-day Universe still have ongoing star formation. The analytic model is also surprisingly realistic and produces results which not only explain why there still is star formation in present-day disk galaxies, but also why the stellar mass in disk galaxies is lower than predicted by the basic CDM model. In dwarf galaxies with a circular speed of 70 km/s, the ejected gas mass may be up to 60% of the total initial gas mass, and in a high star formation case the ejected gas mass may equal the final stellar mass. Dwarf galaxies are also more sensitive to changes in the initial parameters than massive galaxies. In more massive galaxies with a circular speed of 280 km/s, the ejected gas mass is smaller, but may still be 20% of the total gas mass; such galaxies are not very sensitive to changes in the initial conditions or to the effects of supernova feedback. Finally, in the massive galaxies gas may join a galactic fountain, which was not observed in the dwarf galaxies, where the gas was lost instead.
  • van Leeuwen, Richard Eric (2023)
    Energy usage and efficiency is an important topic in the area of cloud computing. It is estimated that around 10% of the world's energy consumption goes towards the global ICT system [1]. One key aspect of the cloud is virtualization, which allows for the isolation and distribution of system resources through the use of virtual machines. In recent years, container technology, which allows for the virtualization of individual processes, has become a popular virtualization technique. However, there is limited research into the scalability of these containers from both an energy efficiency and a system performance perspective. This thesis investigates the issue through large-scale benchmarking experiments. The results indicate that it is not necessarily the total number of containers but the task assigned to each individual container that is relevant to energy efficiency. Key findings show a link between latency measurements performed by individual containers and the number of CPU cores allocated on the host machine, with additional CPU cores causing a drop in latency as the number of containers increases. Further, power consumption appears to peak when CPU utilisation is only at 50%, with additional CPU utilisation causing no further increase in power consumption. Finally, RAM utilisation appears to scale linearly with the total number of containers involved.
  • Helle, Aino (2015)
    The seas and oceans are the scene of multiple human actions, all of which cause pressures on the marine environment. Marine spatial planning (MSP) systematizes the evaluation of the spatial impacts of human actions and takes into consideration their cumulative impacts. A probabilistic model is constructed to estimate the impacts of oil shipping and offshore wind power on 16 species. The quantitative indicators of impact are the loss of breeding success of 5 bird species, the loss of the early development stages of 3 fish species, and the change in the probability of presence/absence of 3 benthic species and 5 algal species. The thesis model works as an independent application, but can be merged as such into an MSP tool that works with a geographic information system (GIS) interface. The impacts of offshore wind power and oil shipping, and especially of a possible oil spill, have been studied in other marine areas, but there are only a few studies of their impacts in the brackish water conditions of the Baltic Sea. The study area of this thesis is the eastern Gulf of Finland (EGOF). The model is constructed using Bayesian networks (BNs), which are graphical probabilistic models. The most important human pressures caused by the actions are identified based on the literature and placed in the model accordingly. The pressures caused by operational offshore wind power are the disturbance to birds and underwater noise; the pressures caused by oil shipping are underwater noise and the oil exposure of species after a possible oil spill. The attenuation of the pressures as a function of increasing distance from the source of pressure is calculated mathematically where possible. Expert elicitation was conducted to fill in the gaps in existing data on the subject: altogether 6 experts were interviewed and another two were consulted informally. The different types of data are integrated in the BN, which allows quantified comparisons between different management options and alternative scenarios. The model predicts that both human actions have negative impacts on the marine environment of the EGOF. The impacts of an offshore wind turbine will be realized with certainty but will be negligible; an oil spill, on the other hand, is unlikely to happen, but if it does, the losses will be extensive. The disturbance of the wind turbine to birds extends to some hundreds of metres from the turbine, depending on the bird species. The losses of the early development stages of fish caused by the underwater noise of a wind turbine are almost certainly below 20% at all distances from the turbine for all studied species. With the most likely sound pressure levels of tankers, the losses of the early development stages of fish also remain below 20% with a high level of certainty at all distances. At these tanker noise levels, the harmless noise class of <90 dB re 1 µPa is reached within some kilometres of the fairway, depending on the original noise level of the tanker. Three alternative oil shipping scenarios for 2020 were compared; the differences among the scenarios are negligible both for the impacts of underwater noise on fish and for the probability of a species being exposed to oil. The model successfully describes the impacts of the human pressures that are known to take place, such as the impacts of offshore wind power, but requires a GIS environment and drift models to be able to predict the probabilities of an oil exposure. The applicability of the model can be increased by taking into consideration additional human actions and a wider selection of human pressures. The thesis model is part of an MSP tool produced in the TOPCONS (Transboundary tools for the spatial planning and conservation of the Gulf of Finland) project, which is a prototype of a tool that can later be applied to marine areas worldwide.
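    As a generic illustration of the modelling approach only, the sketch below builds a three-node discrete Bayesian network with the pgmpy library and queries it; the structure, variable names and probabilities are invented for illustration and are not the thesis model.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Toy structure: spill -> exposure -> loss; states are 0 = no, 1 = yes.
model = BayesianNetwork([("Spill", "Exposure"), ("Exposure", "Loss")])

cpd_spill = TabularCPD("Spill", 2, [[0.98], [0.02]])   # invented prior
cpd_exposure = TabularCPD("Exposure", 2,
                          [[0.99, 0.30],   # P(no exposure | spill = no, yes)
                           [0.01, 0.70]],
                          evidence=["Spill"], evidence_card=[2])
cpd_loss = TabularCPD("Loss", 2,
                      [[0.95, 0.20],       # P(low loss | exposure = no, yes)
                       [0.05, 0.80]],
                      evidence=["Exposure"], evidence_card=[2])
model.add_cpds(cpd_spill, cpd_exposure, cpd_loss)
assert model.check_model()

# Distribution of losses given that a spill has occurred
print(VariableElimination(model).query(["Loss"], evidence={"Spill": 1}))
```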
  • Latif, Khalid (2023)
    The evolution of number systems, demonstrating the remarkable cognitive abilities of early humans, exemplifies the progress of civilization. Rooted in ancient Mesopotamia and Egypt, the origins of number systems and basic arithmetic trace back to tally marks, symbolic systems, and position-based representations. The development of these systems in ancient societies, driven by the needs of trade, administration, and science, showcases the sophistication of early mathematical thinking. While the Roman and Greek numeral systems emerged later, they were not as sophisticated or efficient as their Mesopotamian and Egyptian counterparts. Greek, or Hellenic, culture, which preceded the Romans, played a crucial role in mathematics, but Europe's true impact emerged during the Middle Ages, when it became pivotal in the development of algorithmic arithmetic. The adoption of Hindu-Arabic numerals, featuring a placeholder zero, marked a paradigm shift in arithmetic during the Middle Ages. This innovative system, with its simplicity and efficiency, revolutionized arithmetic and paved the way for advanced mathematical developments. European mathematicians, despite not being the primary innovators of number systems, contributed significantly to the development of algorithmic methods. Techniques such as galley division (division per galea), solutions for quadratic equations, and proportional reduction emerged, setting the foundation for revolutionary inventions like Pascal's mechanical calculator. Mathematical constants and concepts such as zero, infinity, and pi played deeply influential roles in ancient arithmetic. Zero, initially perceived as nothing, became a crucial element in positional systems, enabling the representation of larger numbers and facilitating complex calculations. Infinity, a limitless concept, fascinated ancient mathematicians, leading to the exploration of methods to measure infinite sets. Pi, the mysterious ratio of a circle's circumference to its diameter, sparked fascination, resulting in ingenious methods to compute its value. The development of ancient computational devices further highlights the remarkable ingenuity of early mathematicians, laying the groundwork for future mathematical advancements. The abacus, with its ability to facilitate quick calculations, became essential in trade and administration. The Antikythera mechanism, a 2nd-century astronomical analog computer, showcased the engineering skill of the ancient Greeks. Mechanical calculators like the slide rule and the Pascaline, emerging during the Renaissance, represented significant developments in computational technology. These tools, driven by practical needs in commerce, astronomy, and mathematical computations, paved the way for future mathematical breakthroughs. In conclusion, the evolution of number systems and arithmetic is a fascinating narrative of human ingenuity and innovation. From ancient Mesopotamia to the Renaissance, this journey reflects the intertwined nature of mathematics, culture, and civilization.
  • Mandoda, Purvi (2022)
    Legumes and grains are grown worldwide, and with rising consumption, the identification of metabolites such as phenolic compounds within them is increasingly important. Phenolic compounds are secondary metabolites with multiple beneficial properties, such as antimicrobial, antioxidant and anti-inflammatory activity. The use of Py-GC/MS (pyrolysis-gas chromatography/mass spectrometry) as a faster method for the identification of phenolic compounds is the basis of this investigation. A total phenolic content analysis using the Folin-Ciocalteu method was carried out to determine the presence of phenolic compounds in the eight samples: wheat, barley, oats, pigeon pea, chickpea, fava beans, green peas, and potato peels. UPLC coupled with PDA and FLR detectors was another instrument used to determine the types of phenolic compounds present in the eight samples. Py-GC/MS was able to identify compounds with the phenol moiety but not the phenolic compounds of interest. The total phenolic content analysis established that phenolic compounds were present in all eight samples. Ferulic acid, gallic acid, vanillic acid and 3,4-dihydroxyphenylacetic acid were some of the phenolic compounds identified within the eight samples using the UPLC chromatograms and measured standards.