
Browsing by Subject "simulation"


  • Page, Mathew (2021)
    With rising income inequalities and increasing immigration in many European cities, residential segregation remains a key focus for city planners and policy makers. As changes in the socio-spatial configuration of cities result from the residential mobility of their residents, the basis on which this mobility occurs is an important factor in segregation dynamics. There are many macro conditions which can constrain residential choice and facilitate segregation, such as the structure and supply of housing, competition in real estate markets and legal and institutional forms of housing discrimination. However, segregation has also been shown to occur from the bottom up, through the self-organisation of individual households who make decisions about where to live. Using simple theoretical models, Thomas Schelling demonstrated how individual residential choices can lead to unanticipated and unexpected segregation in a city, even when this is not explicitly desired by any households. Schelling’s models are based upon theories of social homophily, or social distance dynamics, whereby individuals are thought to cluster in social and physical space on the basis of shared social traits. Understanding this process poses challenges for traditional research methods, as segregation dynamics exhibit many complex behaviours including interdependency, emergence and nonlinearity. In recent years, researchers have turned to simulation as one possible method of analysis. Despite this increased interest in simulation as a tool for segregation research, there have been few attempts to operationalise a geospatial model using empirical data for a real urban area. This thesis contributes to research on the simulation of social phenomena by developing a geospatial agent-based model (ABM) of residential segregation from empirical population data for the Helsinki Metropolitan Area (HMA). The urban structure, population composition, density and socio-spatial distribution of the HMA are represented within the modelling environment. Whilst the operational parameters of the model remain highly simplified in order to make processes more transparent, it permits exploration of possible system behaviour by placing the system in a manipulable form. Specifically, this study uses simulation to test whether individual preferences, based on social homophily, are capable of producing segregation in a theoretical system which is free of discrimination and other factors which may constrain residential choice. Three different scenarios were simulated, corresponding to different preference structures and demands for co-group neighbours. Each scenario was simulated for three different potential sorting variables derived from the literature: socio-economic status (income), cultural capital (education level) and language groups (mother tongue). Segregation increases in all of the simulations; however, there are considerable behavioural differences between the different scenarios and grouping variables. The results broadly support the idea that individual residential choices by households are capable of producing and maintaining segregation under the right theoretical conditions. As a relatively novel approach to segregation research, the components, processes, and parameters of the developed model are described in detail for transparency. Limitations of such an approach are addressed at length, and attention is given to methods of measuring and reporting on the evolution and results of the simulations. The potential and limitations of using simulation in segregation research are highlighted through this work.
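    The bottom-up sorting dynamic described in the abstract above can be illustrated with a minimal Schelling-style grid model: agents of two groups relocate whenever the share of like neighbours falls below a tolerance threshold, and a crude segregation measure is tracked. This is an editor-added toy sketch, not the thesis's geospatial agent-based model of the Helsinki Metropolitan Area; the grid size, empty-cell fraction and preference threshold are illustrative assumptions.

```python
import random

# Minimal Schelling-style segregation sketch (toy grid, two groups).
# Not the thesis's geospatial HMA model; all parameters are illustrative.
SIZE, EMPTY_FRAC, THRESHOLD, STEPS = 40, 0.1, 0.4, 50

def init_grid():
    cells = []
    for _ in range(SIZE * SIZE):
        r = random.random()
        cells.append(None if r < EMPTY_FRAC else (0 if r < (1 + EMPTY_FRAC) / 2 else 1))
    return [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def neighbours(grid, x, y):
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx or dy:
                out.append(grid[(x + dx) % SIZE][(y + dy) % SIZE])
    return [n for n in out if n is not None]

def unhappy(grid, x, y):
    agent, nbrs = grid[x][y], neighbours(grid, x, y)
    if agent is None or not nbrs:
        return False
    return sum(n == agent for n in nbrs) / len(nbrs) < THRESHOLD

def step(grid):
    empties = [(x, y) for x in range(SIZE) for y in range(SIZE) if grid[x][y] is None]
    movers = [(x, y) for x in range(SIZE) for y in range(SIZE) if unhappy(grid, x, y)]
    random.shuffle(movers)
    for (x, y) in movers:
        if not empties:
            break
        nx, ny = empties.pop(random.randrange(len(empties)))
        grid[nx][ny], grid[x][y] = grid[x][y], None   # move agent to an empty cell
        empties.append((x, y))

def same_group_share(grid):
    # crude segregation measure: mean share of like neighbours over all agents
    shares = []
    for x in range(SIZE):
        for y in range(SIZE):
            a, nbrs = grid[x][y], neighbours(grid, x, y)
            if a is not None and nbrs:
                shares.append(sum(n == a for n in nbrs) / len(nbrs))
    return sum(shares) / len(shares)

grid = init_grid()
for _ in range(STEPS):
    step(grid)
print("mean like-neighbour share after", STEPS, "steps:", round(same_group_share(grid), 3))
```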
  • Sukuvaara, Satumaaria (2023)
    Many beyond the Standard Model theories include a first order phase transition in the early universe. A phase transition of this kind is presumed to be able to source gravitational waves that might be observed with future detectors, such as the Laser Interferometer Space Antenna. A first order phase transition from a symmetric (metastable) minimum to the broken (stable) one causes the nucleation of broken phase bubbles. These bubbles expand and then collide. It is important to examine in depth how the bubbles collide, as the events during the collision affect the gravitational wave spectrum. We assume the field to interact very weakly or not at all with the particle fluid in the early universe. The universe also experiences fluctuations due to thermal or quantum effects. We look into how these background fluctuations affect the field evolution and bubble collisions during the phase transition in O(N) scalar field theory. Specifically, we numerically simulate two colliding bubbles nucleated on top of the background fluctuations, with the field being an N-dimensional vector under the O(N) symmetry group. Due to the symmetries present, the system can be examined in cylindrical coordinates, lowering the number of simulated spatial dimensions. In this thesis, we calculate the initial-state fluctuations and numerically simulate them together with two bubbles. We present results of the simulation of the field, concentrating on the effects of fluctuations on the O(N) scalar field theory.
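    As a rough illustration of the kind of field evolution described above, the sketch below evolves a single real scalar field in 1+1 dimensions with a tilted double-well potential and two bubbles of the broken phase nucleated at rest, using a leapfrog update. It is a toy in flat space without background fluctuations, not the thesis's O(N) field in cylindrical coordinates; all parameter values are assumptions.

```python
import numpy as np

# Toy 1+1D real-scalar bubble collision (flat space, no fluctuations).
# Illustrative only: the thesis simulates an O(N) field in cylindrical
# coordinates on a fluctuating background; parameters here are assumptions.
N, dx, dt, steps = 2048, 0.1, 0.05, 4000
lam, v, eps = 1.0, 1.0, 0.1                 # quartic coupling, vev, potential tilt
x = np.arange(N) * dx
c1, c2, R = 60.0, 140.0, 10.0               # bubble centres and initial radius
w = 1.0 / np.sqrt(lam / 2)                  # bubble wall width

def dVdphi(phi):
    return lam * phi * (phi**2 - v**2) - eps   # V = lam/4 (phi^2 - v^2)^2 - eps*phi

def bubble(centre):
    return v * (1.0 + np.tanh((R - np.abs(x - centre)) / w))

phi = -v + bubble(c1) + bubble(c2)   # false vacuum ~ -v with two true-vacuum bubbles
phi_old = phi.copy()                 # bubbles start at rest

for _ in range(steps):
    lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2
    phi_new = 2 * phi - phi_old + dt**2 * (lap - dVdphi(phi))   # leapfrog update
    phi_old, phi = phi, phi_new

# fraction of the box converted to the broken phase after the collision
print("broken-phase fraction:", float(np.mean(phi > 0)))
```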
  • Santillo, Jordan (2022)
    Research in radar technology requires readily accessible data from weather systems of varying properties. Lack of real-world data can delay or stop progress in development. Simulation aids this problem by providing data on demand. In this publication we present a new weather radar signal simulator. The algorithm produces raw time series data for a radar signal using physically based methodology with statistical techniques incorporated for computational efficiency. From a set of user-defined scatterer characteristics and radar system parameters, the simulator solves the radar range equation for individual, representative precipitation targets in a virtual weather cell. The model addresses the question of balancing utility and performance in simulating a signal that contains all the essential weather information. For our applications, we focus on target velocity measurements. The signal is created with respect to the changing positions of the targets, leading to a discernible Doppler shift in frequency. We also show the operation of our simulator in generating signal using multiple pulse transmission schemes. First, we establish the theoretical basis for our algorithm. Then we demonstrate the simulator's capability for use in experiments with advanced digital signal processing techniques and data acquisition, focusing on target motion. Finally, we discuss possible future developments of the simulator and their importance in application.
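    The Doppler principle the abstract above relies on can be sketched with a slow-time (pulse-to-pulse) IQ series for a single range gate: point scatterers moving at radial velocities imprint a pulse-to-pulse phase shift, from which a pulse-pair estimator recovers the mean velocity. This is a generic illustration, not the thesis's range-equation-based simulator; wavelength, PRF and scatterer statistics are assumptions.

```python
import numpy as np

# Toy slow-time IQ signal for one range gate with a few point scatterers,
# plus a pulse-pair Doppler velocity estimate. Generic illustration only.
rng = np.random.default_rng(1)
wavelength, prf, n_pulses = 0.053, 1000.0, 64          # C-band-ish wavelength, 1 kHz PRF
T = 1.0 / prf
amp = rng.rayleigh(1.0, size=20)                       # scatterer amplitudes
r0  = rng.uniform(0.0, 50.0, size=20)                  # initial ranges within the gate (m)
vel = rng.normal(8.0, 1.0, size=20)                    # radial velocities (m/s)

m = np.arange(n_pulses)[:, None]                       # pulse index
r = r0[None, :] + vel[None, :] * T * m                 # range of each scatterer per pulse
signal = np.sum(amp * np.exp(-1j * 4 * np.pi * r / wavelength), axis=1)
signal += 0.1 * (rng.normal(size=n_pulses) + 1j * rng.normal(size=n_pulses))  # receiver noise

# Pulse-pair estimator: mean radial velocity from the lag-1 autocorrelation phase.
R1 = np.mean(signal[1:] * np.conj(signal[:-1]))
v_est = -wavelength / (4 * np.pi * T) * np.angle(R1)
print(f"estimated mean radial velocity: {v_est:.2f} m/s (true mean ~8 m/s)")
```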
  • Virtanen, Jussi (2022)
    In this thesis we assess the ability of two different models to predict cash flows in private credit investment funds. The models are of a stochastic type and a deterministic type, which makes them quite different. The data obtained for the analysis is divided into three subsamples: mature funds, liquidated funds and all funds. The full data consists of 62 funds, the subsample of mature funds of 36 funds and the subsample of liquidated funds of 17 funds. Both models are fitted to all subsamples. The parameters of the models are estimated with different techniques: the parameters of the Stochastic model with the conditional least squares method and the parameters of the Yale model with numerical methods. After the estimation of the parameters, the values are explained in detail and their effect on the cash flows is investigated. This helps to understand which properties of the cash flows the models are able to capture. In addition, we assess both models' ability to predict future cash flows. This is done by using the coefficient of determination, QQ-plots and a comparison of predicted and observed cumulative cash flows. With the coefficient of determination we examine how well the models explain the variation of the observed values around the predicted ones. With QQ-plots we determine whether the values produced by the process follow the normal distribution. Finally, with the cumulative cash flows of contributions and distributions we determine whether the models are able to predict the cumulative committed capital and the returns of the fund in the form of distributions. The results show that the Stochastic model performs better in predicting contributions and distributions. However, this is not the case for all subsamples: the Yale model does better for cumulative contributions in the subsample of mature funds. Still, the flexibility of the Stochastic model makes it more suitable for different types of cash flows and subsamples. Therefore, it is suggested that the Stochastic model be used in the prediction and modelling of private credit funds. It is harder to implement than the Yale model, but it provides more accurate predictions.
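    For orientation, the sketch below shows one common published parameterisation of the deterministic Yale (Takahashi-Alexander) cash-flow recursion mentioned above: contributions are a rate applied to the remaining commitment, distributions a rate applied to the grown NAV, and the NAV is rolled forward accordingly. The contribution schedule, growth rate, bow factor and fund life are illustrative assumptions, not the estimates obtained in the thesis.

```python
# Illustrative Yale (Takahashi-Alexander) style cash-flow projection.
# One common published parameterisation; parameter values are assumptions.
def yale_cashflows(commitment=100.0, life=12, bow=2.5, growth=0.08,
                   yield_=0.02, rc=(0.25, 0.33, 0.5)):
    paid_in, nav = 0.0, 0.0
    rows = []
    for t in range(1, life + 1):
        rate_c = rc[t - 1] if t <= len(rc) else 1.0          # contribution rate schedule
        contribution = rate_c * (commitment - paid_in)
        paid_in += contribution
        rate_d = max(yield_, (t / life) ** bow)               # distribution rate with "bow"
        grown_nav = nav * (1 + growth)
        distribution = rate_d * grown_nav
        nav = grown_nav + contribution - distribution
        rows.append((t, contribution, distribution, nav))
    return rows

for t, c, d, nav in yale_cashflows():
    print(f"year {t:2d}: contribution {c:6.2f}  distribution {d:6.2f}  NAV {nav:6.2f}")
```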
  • Pursiainen, Tero (2013)
    The long-run average return on equities shows a sizable premium with respect to their relatively riskless alternative, short-term government bonds. The dominant explanation is that the excess return is compensation for rare but severe consumption disasters which result in heavy losses on equities. This thesis studies the plausibility of this explanation in a common theoretical framework. The consumption disasters hypothesis is studied in the conventional Lucas-tree model with two assets and with constant relative risk aversion preferences, captured by the power utility function. The thesis argues that this oft-used model is unable to account for the high premium, and a simulation experiment is conducted to find evidence for the argument. The consumption process is modelled by the threshold autoregressive process, which offers a simple and powerful way to describe the equity premium as a result of a peso problem. Two statistics, the arithmetic average and the standard deviation, are used to estimate the long-run average and the volatility of the returns. The simulated data is analyzed and compared to real-world financial market data. The results confirm that the potential for consumption disasters produces a lower equity premium than the case without disasters in the Lucas-tree model with power utility. The disaster potential lowers the average return on equity instead of increasing it. This result comes from the reciprocal connection between the coefficient of relative risk aversion and the elasticity of intertemporal substitution, and from the special nature of the equity asset, which is a claim on the consumption process itself. The risk-free asset remains unaffected by the disaster potential. The equity premium remains a puzzle in this framework. The advantage of the threshold autoregressive consumption process is that it shows this result with clarity. Breaking the link between aversion to risk and intertemporal substitution is indeed one possible direction to take. Changing the assumptions about expected consumption or about the equity asset might offer another way forward. Another form of utility or another model is needed if the equity premium is to be explained in financial markets that are free of frictions.
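    The pricing machinery behind the argument above can be sketched for the simplest case of i.i.d. consumption growth in a Lucas tree with power utility, where the price-dividend ratio of the consumption claim is constant and the risk-free rate follows from the stochastic discount factor. This is an editor-added sketch of the textbook setting with an optional rare-disaster state; it does not implement the thesis's threshold autoregressive (peso-problem) consumption process or reproduce its conclusions, and all parameter values are illustrative.

```python
import numpy as np

# Lucas-tree pricing with power utility and i.i.d. consumption growth,
# optionally including a rare disaster state. Illustrative parameters only.
def lucas_tree(beta=0.98, gamma=4.0, mu=0.02, sigma=0.02,
               p_disaster=0.0, disaster_drop=0.35, n=1_000_000, seed=0):
    rng = np.random.default_rng(seed)
    g = np.exp(rng.normal(mu, sigma, n))                 # gross consumption growth
    hit = rng.random(n) < p_disaster
    g = np.where(hit, g * (1.0 - disaster_drop), g)      # rare consumption disaster
    m = beta * g ** (-gamma)                             # stochastic discount factor
    r_free = 1.0 / m.mean()                              # gross risk-free return
    k = (beta * g ** (1.0 - gamma)).mean()               # P/C = k / (1 - k) for i.i.d. growth
    r_equity = (g / k).mean()                            # gross return on the consumption claim
    return r_free, r_equity, r_equity - r_free

for p in (0.0, 0.017):
    rf, re, prem = lucas_tree(p_disaster=p)
    print(f"disaster prob {p:.3f}: R_f {rf:.4f}  E[R_e] {re:.4f}  premium {prem:.4%}")
```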
  • Rönkkö, Niko-Petteri (2020)
    In this thesis, I analyze the causes and consequences of the 1997 Asian Crisis and simulate it with Dynare. The model includes a financial accelerator mechanism, which in part explains the dynamics and the magnitude of the crisis via balance sheet effects. I find that the major components of the crisis were very similar to those of other crises in emerging economies: high levels of foreign-currency denominated debt, unsound financial regulation, and fixed exchange rates with skewed valuation. Even though the simulation does not specifically incorporate different exchange rate regimes, the previous literature draws a clear conclusion that flexible exchange rates lessen a shock's effects on the economy. Thailand, like the other ASEAN countries during the crisis, faced a severe economic contraction as well as changes in the political landscape: due to the crisis, Thailand's GDP contracted by over 10 percent, the country lost almost a million jobs, and the stock exchange index fell 75 percent. In addition, the country underwent riots, resignations of ministers, and several political changes towards more democratic institutions, even though it later faced some backlash and the re-entry of authoritarian figures. As the crisis worsened, the IMF assembled a large rescue package that was given to the ASEAN countries with preconditioned austerity policies. The simulation with recalibrated parameter values appears to be relatively accurate: the dynamics and the impact of the crisis are captured realistically and with correct magnitudes. The financial accelerator mechanism accounts for a large part of the shock's impact on investment and companies' net worth, but it does not account for much of the overall decline in output.
  • Tommiska, Oskari (2021)
    In this thesis I investigate the possibility of using the acoustic time-reversal method to focus the cleaning power of an industrial ultrasonic cleaner. With the acoustic time-reversal method, a pressure field can be focused back onto its original point by recording the pressure signals emitted from that point with acoustic sensors (forward direction) and re-emitting them time-reversed (backward direction). The focusing method studied in this work is based on a finite element simulation model in which both the ultrasonic cleaner and the system to be cleaned were modelled in detail. With the simulation model, an arbitrary point in the region to be cleaned could be chosen as the target for the cleaning power. The signals produced by the simulated forward run were exported from the model, and the backward run was carried out in an experimental setup using the simulated signals. The thesis presents a comparison of the simulated and experimental time-reversal focusing results and shows that, with simulated signals, it is possible to focus acoustic power onto a pre-selected arbitrary point. In addition, the thesis presents an analysis of the effect of the number of sensors on the focusing ability, examines the spatial focusing ability of the ultrasonic cleaner, and verifies the validity of the linearity assumption made in the simulations.
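    The time-reversal principle used above can be illustrated with an idealized free-field delay-and-sum sketch: a pulse emitted from a source is recorded at several sensors, the recordings are time-reversed and back-propagated onto candidate focus points, and the refocused energy peaks near the original source. This toy ignores the finite-element modelling, reflections and transducer physics of the actual study; geometry, frequency and sampling are assumptions.

```python
import numpy as np

# Idealized time-reversal focusing in a homogeneous medium (delay-and-sum toy).
c, fs, f0 = 1500.0, 2.0e6, 500e3            # sound speed (m/s), sampling rate, pulse frequency
t = np.arange(0, 200e-6, 1.0 / fs)
pulse = np.sin(2 * np.pi * f0 * t) * np.exp(-((t - 20e-6) / 5e-6) ** 2)

source = np.array([0.05, 0.08])                               # true source position (m)
sensors = np.array([[sx, 0.0] for sx in np.linspace(0.0, 0.1, 8)])

def delayed(signal, delay_s):
    shift = int(round(delay_s * fs))
    out = np.zeros_like(signal)
    out[shift:] = signal[:len(signal) - shift]
    return out

# forward step: each sensor records the pulse delayed by its travel time
records = [delayed(pulse, np.linalg.norm(s - source) / c) for s in sensors]

# backward step: time-reverse the records, back-propagate onto candidate points
candidates = [np.array([0.05, y]) for y in np.linspace(0.02, 0.14, 61)]
energy = []
for p in candidates:
    field = sum(delayed(r[::-1], np.linalg.norm(s - p) / c)
                for r, s in zip(records, sensors))
    energy.append(np.max(np.abs(field)))
best = candidates[int(np.argmax(energy))]
print("refocused peak at y =", round(best[1], 3), "m (true source y = 0.08 m)")
```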
  • Pousi, Ilkka (2014)
    The Finnish Forest Center produces forest resource data for the use of landowners and actors in the forestry sector. The data is produced mainly by means of airborne laser scanning (ALS), and it is managed in the nationwide Aarni forest resource information system. The produced data also includes stand-specific proposals for harvesting and silvicultural treatment. These are usually generated by simulation, which also provides a suggested year for the action. The collection of forest resource data is based on the area-based approach (ABA). In this method, forest characteristics, such as tree attributes measured on field sample plots, are predicted for the whole inventory area from the corresponding laser and aerial photograph features. Forest characteristics are predicted for grid cells 16 x 16 metres in size. In Aarni, the treatment simulation is based on the averages of the tree attributes generalized from the grid cells to the stand. The method does not take into account possible within-stand variation in tree density, which may cause, for example, delayed thinning proposals, especially for stands with grouped trees. The main aims of this study were: 1. To create a new method in which, in addition to the tree attributes, the subsequent treatment and its timing are simulated at the grid-cell level, after which special decision rules are used to derive the stand-level treatment from the grid cells. 2. To compare the treatments derived with the decision rules with the normal Aarni simulation of 291 field-surveyed stands to determine which method is better. In addition, the relationship between within-stand variation and the timing of the simulated treatments was examined by comparing the deviation of the tree attributes of the grid cells (e.g. basal area) with the corresponding attributes of the stand. The presumption was that, particularly in stands with grouped trees, the problem of delayed thinning could be reduced by using decision rules. The results suggest that the decision rule method gives slightly better results than the Aarni simulation with respect to the timing of treatments. The method gave the best results in young stands where the field treatment proposal was a first thinning. The deviation of the basal area in the grid cells appeared to be slightly larger than average in stands with a large variation in tree density. In these particular stands, the decision rules mostly derived a better timing for thinning than the normal Aarni simulation.
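    The idea of deriving a stand-level treatment from grid-cell predictions rather than from the stand mean can be illustrated with a hypothetical decision rule. The thresholds and the cell-share criterion below are invented for illustration; they are not the rules calibrated in the thesis or used in Aarni.

```python
# Hypothetical comparison of a stand-mean thinning rule and a grid-cell rule.
def stand_mean_rule(cell_basal_areas, thinning_limit=24.0):
    """Classic rule: thin if the stand-average basal area exceeds the limit."""
    mean_ba = sum(cell_basal_areas) / len(cell_basal_areas)
    return mean_ba > thinning_limit

def cell_share_rule(cell_basal_areas, thinning_limit=24.0, min_share=0.4):
    """Grid-cell rule: thin if enough cells individually exceed the limit."""
    share = sum(ba > thinning_limit for ba in cell_basal_areas) / len(cell_basal_areas)
    return share >= min_share

# A stand with grouped trees: half the 16 m x 16 m cells are dense, half sparse.
cells = [30.0] * 10 + [14.0] * 10     # basal area (m2/ha) per grid cell
print("stand-mean rule proposes thinning:", stand_mean_rule(cells))   # mean 22 -> False
print("cell-share rule proposes thinning:", cell_share_rule(cells))   # 50 % of cells dense -> True
```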
  • Lindholm, Heidi (2017)
    The purpose of this study is to explore learning experiences of sixth grade students in the Me & MyCity learning environment. The research task is approached through the criteria of meaningful learning, which have been used as a theoretical framework in a Finnish learning environment study, among others. Previous research has shown that criteria of meaningful learning can be found in different kinds of learning environments. The study focuses on what working life skills the students learn in the Me & MyCity working life and society simulation. Very little research has been conducted on Me & MyCity, so the study is much needed. Research on learning environments shows that understanding and studying the usefulness of different learning environments is necessary, since there are few studies available on the topic. The goal of this study is to generate new information about the Me & MyCity learning environment, and also about which working life skills it can help students learn. The results of this study can also be used, for example, in the development of Me & MyCity. The study was carried out as a case study. The data consists of thematic interviews of a class of students and a teacher from a school in Vantaa who visited Me & MyCity in the spring of 2016, and papers the students wrote (two per student). Altogether there were thematic interviews of 19 students, 38 papers, and one thematic interview of the teacher. The data was analyzed deductively, using the criteria of meaningful learning and a framework of working life skills that was compiled for this study. The results show that all criteria of meaningful learning can be found in Me & MyCity. However, based on the research data, the criterion of constructive learning was fulfilled only to a small extent, so the learning environment of Me & MyCity could be developed to support students' reflection on their own learning more, for example. There is variation in how working life skills are learnt in Me & MyCity. According to the results, some working life skills were not learnt at all. These results can be applied, among other things, to the pedagogical material of Me & MyCity and its development. The results can also be put to use in ordinary school teaching to consider how school work can support students in learning working life skills and how, for example, an authentic learning environment that supports learning can be built in a school environment. The results can also be applied to building a good learning environment that supports the learning of other skills and information as well.
  • Seppä, Riikka (2023)
    The purpose of this work is to investigate the scaling of ’t Hooft-Polyakov monopoles in the early universe. These monopoles are a general prediction of a grand unified theory phase transition in the early universe. Understanding the behavior of monopoles in the early universe is thus important. We tentatively find a scaling for monopole separation which predicts that the fraction of the universe’s energy in monopoles remains constant in the radiation era, regardless of initial monopole density. We perform lattice simulations on an expanding lattice with a cosmological background. We use the simplest fields which produce ’t Hooft-Polyakov monopoles, namely the SU(2) gauge fields and a Higgs field in the adjoint representation. We initialize the fields such that we can control the initial monopole density. At the beginning of the simulations, a damping phase is performed to suppress nonphysical fluctuations in the fields, which are remnants from the initialization. The fields are then evolved according to the discretized field equations. Among other things, the number of monopoles is counted periodically during the simulation. To extend the dynamical range of the runs, the Press-Spergel-Ryden method is used to first grow the monopole size before the main evolution phase. There are different ways to estimate the average separation between monopoles in a monopole network, as well as to estimate the root mean square velocity of the monopoles. We use these estimators to find out how the average separation and velocity evolve during the runs. To find the scaling solution of the system, we fit the separation estimate to a function of conformal time. This way we find that the average separation ξ depends on conformal time η as ξ ∝ η^(1/3), which indicates that the monopole density scales in conformal time the same way as the critical energy density of the universe. We additionally find that the velocity measured with the velocity estimators depends on the separation as approximately v ∝ dξ/dη. It has been shown that a possible grand unified phase transition would produce an abundance of ’t Hooft-Polyakov monopoles and that some of these would survive to the present day and begin to dominate the energy density of the universe. Our result seemingly disagrees with this prediction, though there are several reasons why the predictions might not be compatible with the model we simulate. For one, in our model the monopoles do not move with thermal velocities, unlike what most of the predictions assume happens in the early universe. Thus future work with thermal velocities added to the simulations would be needed. Additionally, we ran simulations only in the radiation-dominated era of the universe. During the matter-dominated era, the monopoles might behave differently.
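    The scaling analysis described above amounts to fitting a power law to the separation estimate as a function of conformal time. The sketch below does this with an ordinary least-squares fit in log-log space on synthetic placeholder data, not on the thesis's simulation output.

```python
import numpy as np

# Power-law scaling fit: estimate p in xi ∝ eta^p from (eta, xi) samples.
rng = np.random.default_rng(0)
eta = np.linspace(50.0, 400.0, 40)                                       # conformal time samples
xi = 2.0 * eta ** (1.0 / 3.0) * (1 + 0.03 * rng.normal(size=eta.size))   # mock separation data

slope, intercept = np.polyfit(np.log(eta), np.log(xi), 1)
print(f"fitted exponent p = {slope:.3f} (xi ∝ eta^p; expected ~1/3 for this mock data)")

# Velocity consistent with v ∝ d(xi)/d(eta): finite differences of the fitted curve.
v = np.gradient(np.exp(intercept) * eta ** slope, eta)
print("d(xi)/d(eta) at eta = 200:", round(float(np.interp(200.0, eta, v)), 4))
```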
  • Papponen, Joni (2022)
    Imaging done with conventional microscopes is diffraction-limited, which sets a lower limit to the resolution. Features smaller than the resolution cannot be distinguished in images. This diffraction limit can be overcome with different setups, such as imaging through a dielectric microcylinder. With this setup it is possible to reach a finer resolution than a diffraction-limited system allows, which is called super-resolution. Propagation of light can be modelled with various simulation methods, such as finite-difference time-domain and ray tracing methods. The finite-difference time-domain method simulates light as waves, which is useful for modelling the propagation of light accurately and for taking into account the interactions between different waves. The ray tracing method simulates light as rays, which requires approximations of the light's behaviour. This means that some phenomena cannot be taken into account, which can affect the accuracy of the results. In this thesis the model for simulating super-resolution imaging with a microcylinder is studied. The model utilizes the finite-difference time-domain method for modelling the near-field effects of the light propagating through the microcylinder and reflecting back from a sample. The reflected light is recorded on the simulation domain boundaries and a near-field-to-far-field transformation is performed to obtain the far field corresponding to the recorded fields. The far field is backward propagated to focus a virtual image of the sample, and the virtual image is then used in a ray tracing simulation as a light source to focus it to a real image on a detector.
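    Numerical back-propagation of a recorded field, of the kind used above to form the virtual image, can be illustrated with the generic angular-spectrum method for a one-dimensional transverse scalar field. This is not the thesis's near-field-to-far-field transformation or its FDTD/ray-tracing pipeline; the wavelength, grid and propagation distances are assumptions.

```python
import numpy as np

# Generic angular-spectrum propagation of a 1-D transverse scalar field;
# z < 0 back-propagates the field. Illustrative parameters only.
wavelength = 0.5e-6
k = 2 * np.pi / wavelength
n, dx = 1024, 0.05e-6
x = (np.arange(n) - n // 2) * dx

def angular_spectrum(u, z):
    """Propagate field u by distance z (negative z back-propagates)."""
    fx = np.fft.fftfreq(n, d=dx)
    kz_sq = k**2 - (2 * np.pi * fx) ** 2
    prop = np.where(kz_sq > 0, np.exp(1j * np.sqrt(np.abs(kz_sq)) * z), 0.0)  # drop evanescent waves
    return np.fft.ifft(np.fft.fft(u) * prop)

# a narrow "object" field, propagated forward 5 um and then back-propagated
u0 = np.exp(-(x / 0.3e-6) ** 2).astype(complex)
u_far = angular_spectrum(u0, 5e-6)
u_back = angular_spectrum(u_far, -5e-6)
print("reconstruction error (max abs):", float(np.max(np.abs(u_back - u0))))
```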
  • Lintuluoto, Adelina Eleonora (2021)
    At the Compact Muon Solenoid (CMS) experiment at CERN (European Organization for Nuclear Research), the building blocks of the Universe are investigated by analysing the observed final-state particles resulting from high-energy proton-proton collisions. However, direct detection of final-state quarks and gluons is not possible due to a phenomenon known as colour confinement. Instead, event properties with a close correspondence with their distributions are studied. These event properties are known as jets. Jets are central to particle physics analysis and our understanding of them, and hence of our Universe, is dependent upon our ability to accurately measure their energy. Unfortunately, current detector technology is imprecise, necessitating downstream correction of measurement discrepancies. To achieve this, the CMS experiment employs a sequential multi-step jet calibration process. The process is performed several times per year, and more often during periods of data collection. Automating the jet calibration would increase the efficiency of the CMS experiment. By automating the code execution, the workflow could be performed independently of the analyst. This, in turn, would speed up the analysis and reduce the analyst's workload. In addition, automation facilitates higher levels of reproducibility. In this thesis, a novel method for automating the derivation of jet energy corrections from simulation is presented. To achieve automation, the methodology utilises declarative programming. The analyst is simply required to express what should be executed, and no longer needs to determine how to execute it. To successfully automate the computation of jet energy corrections, it is necessary to capture detailed information concerning both the computational steps and the computational environment. The former is achieved with a computational workflow, and the latter using container technology. This allows a portable and scalable workflow to be achieved, which is easy to maintain and compare to previous runs. The results of this thesis strongly suggest that capturing complex experimental particle physics analyses with declarative workflow languages is both achievable and advantageous. The productivity of the analyst was improved, and reproducibility facilitated. However, the method is not without its challenges. Declarative programming requires the analyst to think differently about the problem at hand. As a result, there are some sociological challenges to methodological uptake. However, once the extensive benefits are understood, we anticipate widespread adoption of this approach.
  • Viita, Tapani (2013)
    In Finland, grain has to be handled so that the seeds stay in good condition in storage. The most common method of preservation is drying, which accounts for 11 % of the energy consumption of the grain growing chain. The EU has set the aim of achieving 9 % energy savings by 2016 compared with the average energy consumption in the years 2001-2005, and the Ministry of Agriculture and Forestry has started an energy programme for agriculture that aims at energy savings in agriculture. The aim of this study was to find out by computer simulation how to achieve the best energy efficiency in grain drying under different conditions. A series of simulations was carried out to find out whether different adjustments are needed in different conditions. A sensitivity analysis was used to determine which variable (condition or adjustment) affects the drying process the most. To assess the reliability of the simulator, energy consumption and drying time results were compared between the simulations and real drying runs at the Viikki research farm. The best energy efficiency was achieved when a high drying air temperature, fast grain circulation and a small amount of air were used. The grain drying process is very sensitive to the drying air temperature, the moisture of the grain and the amount of air, and quite sensitive to the density of the grain and the outside temperature. The simulator gives reliable results for energy consumption when the grain moisture is more than 17 % (w.b.) and for drying time when the grain moisture is lower than 17 %. By adjusting the grain drying process it is possible to save a remarkable amount of energy. It is important to harvest and dry grain in as good conditions as possible. It is also important to insulate the dryer and to maintain the burner.