Particle size is one of the most important aerosol properties, as it has a great impact on the effects and fate of aerosol particles in the atmosphere and inside our respiratory system. The hygroscopicity of aerosol particles, i.e. their ability to absorb water, determines the size of the particles under different relative humidity (RH) conditions. Dry water-soluble salt particles can roughly double their diameter at an RH of 90%, whereas soot particles and fresh organics experience little to no growth. By studying the hygroscopic growth of aerosol particles, we gain important knowledge of the particle size and phase state under varying RH conditions, of the chemical composition, and of the mixing state, both external and internal.
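The size change described above is commonly quantified as a hygroscopic growth factor, GF(RH) = D(RH)/D_dry. As a minimal illustrative sketch (not taken from this thesis), the widely used single-parameter kappa-Köhler approximation relates GF to a hygroscopicity parameter kappa, neglecting the Kelvin effect; the value kappa = 0.53 below is a commonly quoted figure for ammonium sulfate and is an assumption here:

```python
def growth_factor(kappa: float, rh: float) -> float:
    """Hygroscopic growth factor D(RH)/D_dry from single-parameter
    kappa-Koehler theory, neglecting the Kelvin (curvature) effect.
    rh is the relative humidity as a fraction (~ water activity)."""
    return (1.0 + kappa * rh / (1.0 - rh)) ** (1.0 / 3.0)

# kappa ~ 0.53 (ammonium sulfate, assumed illustrative value):
# at RH = 90% the diameter grows by a factor of roughly 1.8,
# while a non-hygroscopic particle (kappa = 0) does not grow at all.
print(growth_factor(0.53, 0.90))
print(growth_factor(0.0, 0.90))
```

The cube root appears because kappa-Köhler theory is formulated in terms of volumes, while the growth factor is a ratio of diameters.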

This thesis focuses on measuring the hygroscopic properties of aerosol particles. Most of the hygroscopicity studies contained here were conducted using the volatility-hygroscopicity tandem differential mobility analyzer (VH-TDMA) that we built within our group at the University of Helsinki. The main conclusions are: 1) The VH-TDMA we built is indeed an accurate and versatile tool for aerosol hygroscopicity and volatility studies. It is capable of determining the external mixing state of aerosol particles (in terms of hygroscopicity and volatility) and is a good indirect method for estimating their chemical composition. 2) Hygroscopicity studies conducted at sub- and supersaturation conditions may yield significantly different results for organic aerosols. The hygroscopic growth measured in supersaturation may greatly overestimate the growth in subsaturation, which in turn overestimates the scattering and hence the cooling effect of aerosols on climate. 3) The lensing effect of refractive material on the surface of soot particles, and the resulting absorption enhancement, may have been exaggerated in previous studies. Our field measurements showed an average enhancement of 6%, while previous estimates have been as high as 200%.

Lastly, one of the key points of this thesis is to promote the use of the H-TDMA technique in the field of aerosol science. The technique has largely been replaced by cloud condensation nuclei counters (CCNC). The H-TDMA technique is far more accurate and versatile, and, in my opinion, it is easier to measure in subsaturation and predict the outcome in supersaturation than vice versa.

As an example of setting (1), in this thesis we deal with the discrete nonlinear Schrödinger equation (DNLS) with random initial data, and we mainly focus on its applications to the study of transport coefficients in lattice systems. Since the seminal work by Green and Kubo in the mid-1950s, who discovered that transport coefficients for simple fluids can be obtained through a time integral over the respective total current correlation function, the mathematical physics community has been trying to rigorously validate these predictions and to extend them to solids. The main technical difficulty is to obtain at least a reliable asymptotic form of the time behaviour of the Green-Kubo correlation. One possible approach is kinetic theory, a branch of modern mathematical physics that stemmed from the challenge of deriving the classical laws of thermodynamics from microscopic systems. Nowadays kinetic theory deals with models whose dynamics are transport dominated, in the sense that the solutions to the kinetic equations, whose prototype is the Boltzmann equation, typically correspond to ballistic motion interrupted by collisions whose frequency is of order one on the kinetic space-time scale.
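For concreteness, the Green-Kubo relation referred to above can be written, in one standard form for the thermal conductivity of a system of volume $V$ at temperature $T$ (the exact normalization varies between references):

```latex
\kappa \;=\; \frac{1}{k_B T^2 V}\int_0^{\infty} \langle J(t)\,J(0)\rangle \,\mathrm{d}t ,
```

where $J$ is the total energy current and $\langle\cdot\rangle$ denotes the equilibrium average. The technical difficulty mentioned in the text is precisely controlling the long-time decay of the correlation $\langle J(t)\,J(0)\rangle$.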

Referring to the articles in the thesis by Roman numerals [I]-[V], in [I] and [II] we build some technical tools, namely Wick polynomials and their connection with cumulants, to pave the way towards the rigorous derivation of a kinetic equation, the Boltzmann-Peierls equation, from the DNLS model. Paper [III] can be placed in the same framework of kinetic predictions for transport coefficients. In particular, we consider the velocity flip model, which belongs to family (2) of our previous classification, since it consists of a particle chain with harmonic interaction and a stochastic term which flips the velocities of the particles. In [III] we perform a detailed study of the position-momentum correlation matrix via two different methods, and we obtain an explicit formula for the thermal conductivity.

Moreover, in [IV] we consider the Lorentz model perturbed by an external magnetic field, which can be categorized in class (1): it is a gas of non-interacting particles colliding with obstacles located at random positions in the plane. Here we show that under a suitable scaling limit the system is described by a kinetic equation in which the magnetic field affects only the transport term, but not the collisions. Finally, in [V] we study a generalization of the famous Kardar-Parisi-Zhang (KPZ) equation, which falls into category (2), being a nonlinear stochastic partial differential equation driven by space-time white noise. Spohn has recently introduced a generalized vector-valued KPZ equation in the framework of nonlinear fluctuating hydrodynamics for anharmonic particle chains, a research field which is again closely connected to the investigation of transport coefficients. The problem with the KPZ equation is that it is ill-posed. However, in 2013 Hairer succeeded in giving a rigorous mathematical meaning to the solution of the KPZ equation via an approximation scheme involving the renormalization of the nonlinear term by a formally infinite constant. In [V] we tackle a vector-valued generalization of the KPZ equation and prove local-in-time well-posedness using a technique inspired by the so-called Wilsonian Renormalization Group.
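In its scalar form the KPZ equation reads (standard notation):

```latex
\partial_t h \;=\; \nu\,\partial_x^2 h \;+\; \tfrac{\lambda}{2}\,(\partial_x h)^2 \;+\; \xi ,
```

where $\xi$ is space-time white noise. The ill-posedness arises because the solution $h$ has only distributional regularity, so the nonlinear term $(\partial_x h)^2$ is a priori undefined; the renormalization mentioned above subtracts a formally infinite constant to make sense of this square.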

The oceanic exchanges through the Fram Strait, as well as the water mass properties and the changes they undergo in the Fram Strait and its vicinity, are studied using three decades of ship-based hydrographic observations collected between 1980 and 2010. The transports are estimated from geostrophic velocities. The main section, composed of hydrographic stations, runs zonally at about 79 °N. For a few years of the observed period it is possible to combine the 79 °N section with a more northern section, or with a meridional section along the Greenwich meridian, to form quasi-closed boxes and to apply conservation constraints on them in order to estimate the transports through the Fram Strait as well as the recirculation in the strait. In a similar way, zonal hydrographic sections in the Fram Strait and along 75 °N crossing the Greenland Sea are combined to study the exchanges between the Nordic Seas and the Fram Strait. The transport estimates are adjusted with drift estimates based on Argo floats in the Greenland Sea. The mean net volume transports through the Fram Strait, averaged over the various approaches, range from less than 1 Sv to about 3 Sv.
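The geostrophic velocities mentioned above follow from the standard balance between the Coriolis force and the horizontal pressure gradient; in the usual dynamic method the vertical shear is obtained from the observed density field via the thermal wind relation and referenced to a level of assumed known motion (a textbook summary, with conventional symbols):

```latex
f\,v \;=\; \frac{1}{\rho}\,\frac{\partial p}{\partial x},
\qquad
f\,\frac{\partial v}{\partial z} \;=\; -\,\frac{g}{\rho}\,\frac{\partial \rho}{\partial x},
\qquad
f \;=\; 2\Omega\sin\varphi ,
```

where $v$ is the meridional geostrophic velocity across a zonal section, $f$ the Coriolis parameter, and $\varphi$ the latitude.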

The heat loss to the atmosphere from the quasi-closed boxes both north and south of the Fram Strait section is estimated at about 10 TW. The net freshwater transport through the Fram Strait is estimated at 60-70 mSv southward. The insufficiently known northward transport of Arctic Intermediate Water (AIW) originating in the Nordic Seas is estimated using data from the 2002 Oden expedition. At the time of data collection, excess sulphur hexafluoride (SF6) was available, a tracer that, besides its anthropogenic background, derives from a mixing experiment carried out in the Greenland Sea in 1996. The excess SF6 can be used to distinguish AIW from the upper Polar Deep Water originating in the Arctic Ocean. It is estimated that 0.5 Sv of AIW enters the Arctic Ocean.

The deep waters in the Nordic Seas and in the Arctic Ocean have become warmer, and in the Greenland Sea also more saline, during the three decades studied in this work. The temperature and salinity properties of the deep waters found in the Fram Strait, of both Arctic Ocean and Greenland Sea origin, have become similar and continue to do so. How these changes will affect the circulation patterns remains to be seen.

First, to better describe the present-day land cover in the regional climate model, we introduced an up-to-date, high-resolution land cover map to replace the inaccurate and outdated default land cover map for Fennoscandia. Second, in order to provide background information for future forest management actions aimed at climate change mitigation, we studied the biogeophysical effects on the regional climate of peatland forestation, which has been the dominant land cover change in Finland over the last century. Moreover, climate variability can influence the land surface. Although drought is uncommon in northern Europe, an extreme drought occurred in Finland in the summer of 2006 and induced visible drought symptoms in boreal forests. Thus, we assessed a set of drought indicators against drought impact data from boreal forests in Finland to determine how well they indicate summer drought. Finally, the impacts of summer drought on the water use efficiency of boreal Scots pine forests were studied to gain a deeper understanding of carbon and water dynamics in boreal forest ecosystems.

In summary, the key findings of this thesis include: 1) The updated land cover map led to a slight decrease in the biases of the simulated climate conditions. It is expected that the model performance could be further improved by development of the model physics. 2) Peatland forestation in Finland can induce a warming effect of up to 0.43 K in spring and a slight cooling effect of less than 0.1 K in the growing season, due to decreased surface albedo and increased evapotranspiration, respectively. Corresponding to the spring warming, the snow clearance day was advanced by up to 5 days in the 15-year mean. 3) Of the assessed drought indicators, the soil moisture index SMI was the most capable of capturing the spatial extent of the observed forest damage induced by the extreme drought of 2006 in Finland. Thus, a land surface model capable of reliable predictions of regional soil moisture is important for future drought predictions in the boreal zone. 4) The inherent water use efficiency (IWUE) increased during drought at the ecosystem level, and IWUE was found to be more appropriate than the ecosystem water use efficiency (EWUE) for indicating the impacts of drought on ecosystem functioning. The combined effects of soil moisture drought and atmospheric drought on stomatal conductance have to be taken into account in land surface models at the global scale when simulating drought effects on plant functioning.
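EWUE and IWUE are commonly defined in the literature as gross primary production (GPP) per unit evapotranspiration (ET), with IWUE additionally scaled by the vapour pressure deficit (VPD) to account for atmospheric demand. Whether the thesis uses exactly these formulations is an assumption; the sketch below is only illustrative:

```python
def ewue(gpp: float, et: float) -> float:
    """Ecosystem water use efficiency: carbon uptake (GPP) per unit
    water lost through evapotranspiration (ET)."""
    return gpp / et

def iwue(gpp: float, et: float, vpd: float) -> float:
    """Inherent water use efficiency: EWUE multiplied by the vapour
    pressure deficit (VPD), which rises during atmospheric drought
    and drives stomatal closure."""
    return gpp * vpd / et

# During drought, VPD typically rises; IWUE can therefore increase
# even when the plain GPP/ET ratio changes little.
print(ewue(8.0, 2.0), iwue(8.0, 2.0, 1.5))
```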

According to current knowledge, the probability of radiation-induced stochastic effects, which include cancer risk and the risk of hereditary effects, increases linearly as a function of the radiation dose. The organ dose is a better quantity for estimating patient-specific risk than the effective dose, which is meant to be used only for populations and does not consider patient age or gender. Moreover, the tissue weighting factors used in the effective dose calculation are based on whole-body irradiations, whereas in X-ray examinations only a part of the patient is exposed to radiation.
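For reference, the effective dose $E$ is defined as a weighted sum of the equivalent organ doses $H_T$ (standard ICRP formulation):

```latex
E \;=\; \sum_T w_T\, H_T ,
\qquad
H_T \;=\; \sum_R w_R\, D_{T,R} ,
```

where $w_T$ are the tissue weighting factors, $w_R$ the radiation weighting factors, and $D_{T,R}$ the mean absorbed dose to tissue $T$ from radiation $R$. Because the $w_T$ are population averages, $E$ is ill-suited to individual risk estimation, as noted above.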

The phantoms used in medical dosimetry are either computational or physical, and computational phantoms are further divided into mathematical and voxel phantoms. Phantoms ranging from simplified to as realistic as possible have been developed to simulate different targets, but the organ doses determined with them can differ considerably from the real organ doses of the patient. There are also standard and reference phantoms in use, which provide a dose estimate for a so-called average patient. Due to the considerable variation in patient anatomies, the real dose might differ from the dose to a standard or reference phantom.

The aim of this thesis was to determine organ doses based on dose measurements and Monte Carlo simulations in four X-ray imaging modalities: general radiography, CT, mammography and dental radiography. The effect of patient and phantom thickness and of radiation quality on the organ doses in a projection X-ray examination of the thorax was studied via Monte Carlo simulations using both mathematical phantoms and patient CT images. The effect of breast thickness on the mean glandular doses (MGDs) was determined based on measurements with phantoms of different thicknesses and on diagnostic and screening data collected from patient examinations, and the radiation qualities used in patient and phantom exposures were studied. For fetal dose estimation, conversion coefficients were determined based on phantom measurements in CT and dental radiography examinations. Additionally, the effect of lead shields on fetal and breast doses was determined in dental examinations.

The difference between Monte Carlo simulated organ doses in patients and in mathematical phantoms was large, up to 55% for the examined organs in projection imaging. In mammographic examinations, the difference between MGDs calculated from collected patient data and from phantom measurements was up to 30%. Thus, in mammography, patient dose data cannot be replaced by phantom measurements. The properties and limitations of phantoms must be known when they are used.

Estimating the fetal dose from conversion coefficients requires an understanding of the cases in which the coefficients are applicable. When used correctly, they provide a simple method of dose estimation in which the application-specific dose quantity can be taken into account. The conversion coefficients determined in this thesis can be used to estimate the fetal dose in CT examinations based on the volume-weighted CT dose index (CTDIvol), and in dental examinations based on the dose-area product (DAP).
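The conversion-coefficient approach amounts to multiplying the measured dose quantity by an examination-specific coefficient. A minimal sketch follows; the function names and the numeric values in the usage comment are hypothetical placeholders, not coefficients from the thesis:

```python
def fetal_dose_ct(ctdi_vol_mgy: float, k: float) -> float:
    """Fetal dose estimate (mGy) in a CT examination: an
    examination-specific conversion coefficient k times the
    volume-weighted CT dose index CTDIvol (mGy)."""
    return k * ctdi_vol_mgy

def fetal_dose_dental(dap: float, k: float) -> float:
    """Fetal dose estimate in a dental examination: a conversion
    coefficient k (dose per unit dose-area product) times DAP."""
    return k * dap

# Hypothetical example: CTDIvol = 10 mGy with an assumed k = 0.4
print(fetal_dose_ct(10.0, 0.4))
```

The coefficient k absorbs the geometry dependence (e.g. whether the fetus lies inside or outside the primary beam), which is why understanding its range of applicability is essential.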

In projection imaging, the lung and breast doses decreased as the patient's anterior-posterior thickness increased, whereas in mammography the MGDs increased with the compressed breast thickness. In CT examinations, the fetal dose remained almost constant in examinations where the fetus was entirely within the primary radiation beam. When the fetus was outside the primary beam, the fetal dose increased exponentially with decreasing distance of the fetus from the scan range. The conversion coefficients in the studied projection imaging examination were more convergent as a function of the half value layer (HVL) than as a function of the tube voltage. The HVL alone describes the radiation quality better than the tube voltage alone, which additionally requires knowledge of the total filtration. In mammography, it is possible for a phantom and a patient with the same equivalent thickness to be irradiated with different radiation qualities when automatic exposure control is used.
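The HVL is the thickness of material that halves the beam intensity; for a narrow monoenergetic beam with linear attenuation coefficient $\mu$ it follows directly from exponential attenuation (a standard relation):

```latex
I(x) \;=\; I_0\, e^{-\mu x}
\qquad\Longrightarrow\qquad
\mathrm{HVL} \;=\; \frac{\ln 2}{\mu} .
```

Because the HVL is measured for the actual filtered spectrum, it characterizes the beam quality directly, whereas the tube voltage alone does not.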

Despite the relatively large shielding effect achieved with lead shielding in dental imaging, the fetal dose without lead shielding and the related exposure-induced increase in the risk of childhood cancer death are minimal (less than 10 µGy and 10⁻⁵ %, respectively), so there is no need for abdominal shielding. The exposure-induced increase in the risk of breast cancer death is of the same order of magnitude as the increase in the risk of childhood cancer death, so breast shielding was also considered unnecessary. Most importantly, a clinically justified dental radiographic examination must never be avoided or postponed due to pregnancy.

About 2.6 million years ago, early humans perhaps accidentally discovered that sharp stone flakes made it easier to cut the flesh from around bones. From sharp flakes to the first handaxes took hundreds of thousands of years; the development was thus extremely slow. Alessandro Volta's invention of the voltaic pile (battery) in 1800 started an immense journey, and only one hundred years later humans had all the necessary means to start examining the Earth's subsurface. Since then, the development has been rapid, resulting in numerous methods (e.g. magnetic, gravimetric, electromagnetic and seismic) and techniques for resolving the Earth's treasures.

The theoretical basis for the radio imaging method (RIM) was established long before the method was utilized for exploration purposes. RIM is a geotomographic electromagnetic method in which the transmitter and the receivers are placed in different boreholes to delineate electric conductors between the boreholes. It is a frequency-domain method, and the continuous wave technique is usually utilized. One of the pioneers was L.G. Stolarczyk in the USA in the 1980s. In the former Soviet Union, interest in RIM was high in the late 2000s. Our present device is also Russian-based. Furthermore, in South Africa and Australia, a considerable amount of effort has been invested in RIM.

The RIM device is examined superficially. It is the essential part of our RIM system, referred to as electromagnetic radiofrequency echoing (EMRE). The idea behind the device is excellent. However, several poor solutions have been utilized in its construction, many of them possibly resulting from a lack of good electronic components. The overall electronic construction of the device is very complicated. At least two essential properties are lacking, namely circuits for measuring the input impedances of the antennas and the return loss needed to obtain the actual output power. Of course, the digitalization of data in the borehole receiver could give additional benefits in data handling. The measurements can be monitored in real time on a screen, allowing the operator to gain initial insights into the subsurface geology at the site and to modify the measurement plan if necessary. Even today, no practical forward modelling tool for examining the behaviour of electromagnetic waves in the Earth's subsurface is available for the RIM environment, and interpretation is thus traditionally based on linear reconstruction techniques. Assuming low-contrast and straight-ray conditions can generally provide good and rapid results, even during the measurement session.
Electrical resistivity logging is usually one of the first methods used in a new borehole. Comparing the logging data with the measured amplitude data can readily reveal situations where a nearby and relatively limited conductive formation is mostly responsible for the high attenuation levels between boreholes, which can hence be taken into account in the interpretation. The transient electromagnetic method (TEM) operates in the time domain. TEM is also a short-range method and can very reliably reveal nearby conductors. RIM and TEM data from the ore district coincide well. These issues are considered in detail in Publication I.

The functioning of an antenna is highly dependent on the environment in which it is placed. The primary task of the antenna is to radiate and receive electromagnetic energy; in other words, the antenna is a transducer between the generator and the environment. A simple bare wire can serve as a diagnostic probe to detect conductors in the borehole vicinity. However, borehole antennas are generally highly insulated to prevent the leakage of current into the borehole, and at the same time the insulation reduces the sensitivity of the antenna current to the ambient medium, especially when the electric properties of the insulation and the surrounding material differ significantly. Monitoring the input impedance of the antenna could nevertheless help in estimating its effectiveness in the borehole; this property is lacking in the present device. The scattering parameter s11 defines the relationship between the reflected and incident voltages, i.e. it provides information on the impedance matching chain. The behaviour of the impedance of the insulated antennas in different borehole conditions was estimated using simple analytical methods, such as the models of Wu, King and Giri (WKG) and Chen and Warne (CHEN), and highly sophisticated numerical software, such as FEKO from EM Software and Systems (Altair). According to the results, our antennas maintain their effectiveness and feasibility over the whole frequency band (312.5−2500 kHz) utilized by the device. However, the highest frequency (2500 kHz) may suffer under certain ambient conditions. The resolution is closely related to the frequency: higher frequencies give better resolution, but at the expense of range. These issues are clarified in Publication II.
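The relationship between s11, the antenna input impedance and the return loss can be sketched as follows. This is a standard textbook relation, not code from the EMRE system, and the 50 Ω reference impedance is an assumption for illustration:

```python
import math

def s11(z_load: complex, z0: complex = 50.0) -> complex:
    """Reflection coefficient at the antenna feed: ratio of the
    reflected to the incident voltage for load impedance z_load
    on a line with reference impedance z0."""
    return (z_load - z0) / (z_load + z0)

def return_loss_db(z_load: complex, z0: complex = 50.0) -> float:
    """Return loss in dB: larger values mean better matching, i.e.
    more of the generator power is actually delivered to the antenna."""
    return -20.0 * math.log10(abs(s11(z_load, z0)))

# A 75 ohm load on a 50 ohm line reflects |s11| = 0.2 of the
# incident voltage, i.e. a return loss of about 14 dB.
print(abs(s11(75.0)), return_loss_db(75.0))
```

Measuring s11 at the feed is precisely what the missing impedance-monitoring circuit would enable: from it, the fraction of output power actually radiated can be inferred.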

Electromagnetic methods are based on the fact that earth materials may have large contrasts in their electrical properties. A geotomographic RIM survey can have several benefits over ground-level EM sounding methods. When the transmitter is in a borehole, boundary effects due to the ground surface and the strong attenuation caused by soils are easily eliminated. A borehole survey also brings the survey closer to the targets, so higher frequencies can be used, which means better resolution. Viewing the target from different angles and directions also yields better reconstruction results. The fundamental principles of electromagnetic fields are explained in order to distinguish diffusive movement (strongly attenuating propagation) from wave propagation and to give a good conception of the possible transillumination depths of RIM. Transillumination depths of up to 1000 m are possible in a highly resistive environment using the lowest measurement frequency (312.5 kHz). In this context, an interesting and challenging case study is also presented from the area of a repository for spent nuclear fuel in Finland. The task was to examine the usefulness of RIM in the area and to determine how well the apparent resistivity could be associated with the structural integrity of the rock. The measurements were successful and the results convinced us of the potential of RIM. Publication III is related to these issues.
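The distinction between diffusive and wave-like behaviour can be made precise through the ratio of conduction to displacement currents, $\sigma/(\omega\varepsilon)$; in the diffusive (good-conductor) regime the field decays over the plane-wave skin depth (a standard relation, with $\rho = 1/\sigma$ the resistivity):

```latex
\delta \;=\; \sqrt{\frac{2}{\omega\mu\sigma}} \;=\; \sqrt{\frac{2\rho}{\omega\mu}} ,
```

so a highly resistive environment (large $\rho$) together with a low frequency maximizes $\delta$, consistent with the transillumination depths quoted above.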

In Finland, active use of RIM started in 2005, when Russian RIM experts jointly with GTK carried out RIM measurements at Olkiluoto. The results are presented in Publication IV. In this pioneering work, extensive background information (e.g. versatile geophysical borehole logging, optical imaging, 3D vertical seismic profiling (VSP) and single-hole radar reflection measurements) was available from the site. The comparability of the results was good: for example, low-resistivity or highly attenuating areas near boreholes identified in the RIM measurements coincided well with resistivity logging and radar results. Electric mise-à-la-masse and high-frequency electromagnetic RIM displayed even better comparability. The comparability of the surface electromagnetic sounding data and the RIM data was also good, although the tomographic reconstruction is much more detailed. In overall conclusion, the attenuation measurements were well suited to recording subsurface resistivity properties and continuity information between boreholes at Olkiluoto. To date, we have utilized RIM in two quite different environments: Olkiluoto, a spent nuclear fuel repository area in Finland with solid crystalline bedrock, and Pyhäsalmi, an ore district with a massive sulphide deposit. Although Pyhäsalmi is the more ideal research target for RIM, the utilization of the method has proven successful in both cases.

In the first article of the thesis, we derive estimates for the essential and weak essential norms of a Volterra-type operator in terms of its symbol when the operator acts on the Hardy spaces, BMOA and VMOA. The essential and weak essential norms of a linear operator are its distances from the compact and the weakly compact operators, respectively. In particular, it follows from our estimates that the compactness and weak compactness of the Volterra-type operator coincide when its domain is the non-reflexive Hardy space, BMOA, or VMOA.
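In symbols, for a bounded operator $T \colon X \to Y$ between Banach spaces (the notation for the weak essential norm varies in the literature):

```latex
\|T\|_{e} \;=\; \inf\bigl\{\, \|T - K\| \;:\; K \colon X \to Y \ \text{compact} \,\bigr\},
\qquad
\|T\|_{w} \;=\; \inf\bigl\{\, \|T - K\| \;:\; K \ \text{weakly compact} \,\bigr\} ,
```

so $T$ is compact exactly when $\|T\|_{e} = 0$, and weakly compact exactly when $\|T\|_{w} = 0$.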

In the second article, the notion of strict singularity of a linear operator is investigated in the case of the Volterra-type operator acting on the Hardy spaces. An operator between Banach spaces is strictly singular if its restriction to any closed infinite-dimensional subspace fails to be a linear isomorphism onto its range. We construct an isomorphic copy M of the sequence space of p-summable sequences and show that a non-compact Volterra-type operator restricted to M is a linear isomorphism onto its range. This implies that the strict singularity and compactness of this operator coincide in the Hardy space case.

In the third article, we provide estimates for the operator norms and essential norms of the Volterra-type operator acting between weighted Bergman spaces, where the weight function satisfies a doubling condition.
