
Browsing by Title


  • Zheng, Ruoxin (2024)
    Carbon Fiber-Reinforced Polymer (CFRP) composites are widely employed in industrial sectors including aerospace, automotive, maritime and sports due to their stiffness and lightweight properties. However, these materials are also vulnerable to impacts and long-term use, which can result in various types of damage that may influence their expected lifetime and safety. X-ray microtomography provides three-dimensional reconstruction with high-resolution images, making it an ideal method for damage detection among traditional non-destructive testing (NDT) methods. To utilize X-ray micro-CT images, this work explores the potential of deep learning-based object detection methods, with a particular focus on the YOLOv8 model developed by Ultralytics, combined with transfer learning. Using a highly complex pre-trained model with a limited annotated dataset of 60 images yields high precision and recall rates. Furthermore, evaluation on different dataset sizes shows that annotating 5% of the images from the entire sample is sufficient for training a reliable model. The damages predicted over the entire sample, together with quantitative information, reveal the potential for impact-level evaluation based on the location and distribution of damage. The results not only demonstrate the successful application of cutting-edge deep learning methods to materials science and X-ray microtomography, but also establish a foundation for AI-based automation in industrial applications such as quality control and lifetime monitoring.
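Precision and recall for a detector such as YOLOv8 are computed by matching predicted boxes to ground-truth boxes by intersection-over-union (IoU). The sketch below illustrates that bookkeeping only; it is not the evaluation code used in the thesis, and the corner-based box format and 0.5 IoU threshold are common conventions assumed here:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(predictions, ground_truth, iou_threshold=0.5):
    """Greedy one-to-one matching of predictions to ground-truth boxes."""
    matched = set()
    tp = 0
    for pred in predictions:
        best_i, best_iou = None, iou_threshold
        for i, gt in enumerate(ground_truth):
            if i not in matched and iou(pred, gt) >= best_iou:
                best_i, best_iou = i, iou(pred, gt)
        if best_i is not None:          # true positive: matched a new box
            matched.add(best_i)
            tp += 1
    precision = tp / len(predictions) if predictions else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    return precision, recall
```

With this one-to-one matching, a prediction counts as a true positive only if it overlaps a not-yet-matched ground-truth box at or above the threshold.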
  • Matakos, Alexandros (2024)
    This thesis presents DeepGT, a 3D Convolutional Neural Network designed to enhance the spatial resolution of GNSS Tropospheric Tomography, a technique for estimating atmospheric water vapor distribution using GNSS signals. By utilizing Slant Wet Delays from dense GNSS networks and boundary meteorological data from Numerical Weather Prediction models, DeepGT refines low-resolution tomographic wet refractivity fields. The proposed method quadruples the horizontal resolution, while improving the accuracy of the tomographic reconstruction. Two experiments are conducted to validate this: one with real-world SWEPOS data and another with a hypothetical dense GNSS network. The results demonstrate the potential of deep learning models such as DeepGT in enhancing GNSS Meteorology, with implications for improved weather forecasting and climate studies.
  • Maljanen, Katri (2021)
    Cancer is a leading cause of death worldwide. Contrary to what its name suggests, cancer is not a single disease. It is a group of diseases that arises from the expansion of a somatic cell clone. This expansion is thought to be a result of mutations that confer a selective advantage to the cell clone. Mutations that are advantageous to cells, resulting in their proliferation and escape from normal cell constraints, are called driver mutations. The genes that contain driver mutations are known as driver genes. Studying these mutations and genes is important for understanding how cancer forms and evolves. Various methods have been developed that can discover these mutations and genes. This thesis focuses on a method called Deep Mutation Modelling, a deep learning-based approach to predicting the probability of mutations. Deep Mutation Modelling’s output probabilities offer the possibility of creating sample- and cancer type-specific probability scores for mutations that reflect the pathogenicity of the mutations. Most methods in the past have produced scores that are the same for all cancer types; Deep Mutation Modelling offers the opportunity to make a more personalised score. The main objectives of this thesis were to examine the Deep Mutation Modelling output, whose characteristics were previously unknown; to see how the output compares against other scoring methods; to study how the probabilities behave in mutation hotspots; and, lastly, to test whether the probabilities could be used in a common driver gene discovery method. Overall, the goal was to see if Deep Mutation Modelling works and if it is competitive with other known methods. The findings indicate that Deep Mutation Modelling works in predicting driver mutations, but that it does not have sufficient power to do this reliably and requires further improvements.
  • Qian, Yuchen (2013)
    Collaborative applications like online markets, social network communities, and P2P file sharing sites are in popular use these days. However, in an environment where entities have had no interaction before, it is difficult to make trust decisions about a new transacting partner. To predict the quality of further interactions, we need a reputation system to help establish trust relationships. Meanwhile, motivated by financial profit or personal gain, attackers emerge to manipulate the reputation systems. An attacker may aim to slander others, promote oneself, or undermine the whole reputation system. Vulnerable components in a reputation system might bring in potential threats which might be taken advantage of by attackers. In order to give an accurate reputation estimate and better user satisfaction, a reputation system should properly reflect the behavior of the participants and should be difficult to manipulate. To resist attacks, there are various defense mechanisms, for example, mitigating the generation and spreading of false rumors, reasonably assigning an initial reputation value to newcomers, and gradually discounting old behavior. However, each defense mechanism has limitations. There is no perfect defense mechanism which can resist all attacks in all environments without any trade-offs. As a result, to make a reputation system more robust, we need to analyze its vulnerabilities and limitations, and then incorporate the corresponding defense mechanisms into it. This thesis conducts a literature survey on reputation systems, inherent vulnerabilities, different kinds of attack scenarios, and defense mechanisms. It discusses the evolution of attacks and defense mechanisms, evaluates various defense mechanisms, and proposes suggestions on how to incorporate defense mechanisms into reputation systems.
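Two of the defense mechanisms mentioned, assigning newcomers an initial reputation value and gradually discounting old behavior, can be combined in a single scoring rule. The sketch below is purely illustrative; the half-life, prior and weights are invented parameters, not values from any system surveyed in the thesis:

```python
def reputation(ratings, now, half_life=30.0, prior=0.5, prior_weight=5.0):
    """Time-discounted average rating.

    ratings: list of (timestamp_days, score) pairs with score in [0, 1].
    Older ratings are exponentially down-weighted (half_life in days),
    and a neutral prior acts as the initial reputation of a newcomer.
    """
    num = prior * prior_weight   # the prior behaves like prior_weight
    den = prior_weight           # fictitious ratings of value `prior`
    for t, score in ratings:
        w = 0.5 ** ((now - t) / half_life)   # weight halves every half_life
        num += w * score
        den += w
    return num / den
```

A newcomer with no ratings sits at the prior, and a perfect rating pulls the score up less and less as it ages, which limits both self-promotion with fresh identities and reliance on stale good behavior.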
  • Takanen, Emilia (2023)
    The thesis proves the Deligne–Mumford compactification theorem. The theorem states that, under certain conditions, for a sequence of hyperbolic surfaces of the same signature there exists a limit surface together with diffeomorphisms to every member of the sequence, such that the metrics pulled back by these diffeomorphisms converge on the limit surface to a limit metric. In this thesis the Deligne–Mumford compactification theorem is proved by first establishing the corresponding result for the simple building blocks of hyperbolic surfaces. The corresponding result is proved first for the canonical collars of a Y-piece, and this is then used to prove the result for Y-pieces. Finally, it is shown that every hyperbolic surface can be constructed from Y-pieces, which immediately yields the result for all hyperbolic surfaces.
  • Helander, Jenni (2015)
    To form a colour image, a digital camera needs, for each pixel, information about three colours: red, green, and blue. An ordinary digital camera, however, does not measure a numerical value for each of these colours at every pixel, but only one of them. Demosaicing algorithms use this incomplete colour information measured by the camera to estimate the missing colour values at each pixel. The purpose of this thesis is to present a few such demosaicing algorithms and to compare the results they produce. The thesis first presents the background theory needed to understand the topic itself. This is done in Chapter 2, which first defines an image and related terminology, presents two colour spaces, the RGB colour space and the CIELAB colour space, and explains how the conversion from the RGB colour space to the CIELAB colour space is carried out. After the colour spaces, Chapter 2 briefly covers the operation of a digital camera and how it differs from a traditional film camera. A film camera forms the image of the photographed subject on film, but a digital camera uses no old-fashioned film roll or paper; instead, the image is formed electronically on a CCD sensor or a CMOS sensor, which are presented next. Both CCD and CMOS sensors are colour-blind components used for image formation. To obtain a colour photograph, a colour filter must be placed in front of the sensor in use. Of such colour filters, the widely used Bayer filter is presented. Finally, Chapter 2 introduces the Fourier transform, convolution, and SSIM. Chapter 3 presents three demosaicing algorithms: bilinear interpolation, gradient-corrected bilinear interpolation, and a homogeneity-directed demosaicing algorithm. Chapter 4 presents the data used in the thesis, two photographs taken by the author.
The photographs were taken with a camera that measures, at each pixel, every colour needed to form a colour image. The colour images produced by the demosaicing algorithms therefore have a reference, which is at the same time the target image of the algorithms. Chapter 5 presents the results obtained with the algorithms on the thesis data, and Chapter 6 draws conclusions from the results. The results are even surprising. All of the presented algorithms produce good results, but none of them turns out to be the best or the worst. The algorithms appear to perform differently in different situations. If the images to be processed are large enough, bilinear interpolation appears to work best. If the images are small and sharp edges are needed, gradient-corrected bilinear interpolation is a good choice. If the images are small and as few colour artefacts as possible are desired, the homogeneity-directed demosaicing algorithm is a good choice.
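The bilinear interpolation algorithm of Chapter 3 can be illustrated for the green channel of a Bayer mosaic. This is a minimal sketch assuming an RGGB pattern (green measured where row + column is odd) and simple truncation at image borders; it is not the thesis's implementation:

```python
def interpolate_green(mosaic):
    """Bilinear interpolation of the green channel from a Bayer mosaic.

    mosaic: 2D list of raw sensor values. In the assumed RGGB pattern,
    green is measured where (row + col) is odd; red/blue where even.
    """
    h, w = len(mosaic), len(mosaic[0])
    green = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if (y + x) % 2 == 1:      # green was measured here
                green[y][x] = mosaic[y][x]
            else:                     # average the measured green neighbours
                neighbours = [mosaic[ny][nx]
                              for ny, nx in ((y - 1, x), (y + 1, x),
                                             (y, x - 1), (y, x + 1))
                              if 0 <= ny < h and 0 <= nx < w]
                green[y][x] = sum(neighbours) / len(neighbours)
    return green
```

The red and blue channels are filled analogously from their diagonal and axial neighbours; gradient-corrected variants add a correction term from the measured channel at the pixel itself.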
  • Viding, Jasu (2020)
    A cluster of containerized workloads is a complex system where stacked layers of plugins and interfaces can quickly hide what’s actually going on under the hood. This can result in incorrect assumptions, security incidents, and other disasters. From a networking viewpoint, this paper dives into the Linux networking subsystem to demystify how container networks are built on Linux systems. This knowledge of "how" then allows one to understand the different networking features of Kubernetes, Docker, or any other containerization solution developed in the future.
  • Narayanasamy, Shaman (2012)
    Cyanobacteria are ancient photosynthetic microorganisms found in both fresh and saline water bodies all over the world. Anabaena is a genus of filamentous heterocystous diazotrophic cyanobacteria that are common in freshwater lakes and often implicated in the formation of blooms. They are known to play a vital role in the nitrogen cycle and to produce harmful toxins. The reason for this toxin-producing nature is still unknown. Anabaena sp. strain 37, isolated from Lake Sääksjärvi, western Finland, was found to produce the neurotoxin anatoxin-a, which affects the nervous systems of humans and animals and is capable of causing paralysis. During the past decade, genome sequencing has aided in the understanding of genetic information in many organisms, including cyanobacteria. A whole genome sequencing project was carried out to understand the mechanism of anatoxin-a production in Anabaena sp. strain 37. The 454 pyrosequencing produced 258,430 reads with a coverage of approximately 22X. The data were subjected to a de novo assembly which produced a draft genome made up of 828 contigs above 500 bp, with an N50 contig of 10,548 bp and a longest contig of 47,660 bp. The draft assembly underwent a finishing procedure which included scaffolding, gap closure and error correction. Two types of mate pair libraries, 3 Kb and 8 Kb, were constructed and sequenced for scaffolding. Scaffolding using 196,221 3 Kb mate pair reads yielded 31 major scaffolds with an N50 scaffold of 344,872 bp. A second scaffolding using 34,498 8 Kb mate pair reads resulted in 16 scaffolds and an N50 scaffold of 1,085,340 bp. Three automated gap closure rounds were carried out using consed autofinish. The primers were used to amplify the genomic DNA with PCR, and the products were sequenced using Sanger sequencing. A total of 1,406 Sanger reads were used to close more than 800 gaps in the draft assembly.
In addition, the 454-based draft assembly contained many sequencing errors in single nucleotide homopolymeric regions of three-mers and above. Moreover, these errors were found in coding regions, namely the anatoxin-a synthetase gene cluster, which was further confirmed with additional PCR and Sanger sequencing. There were 370,648 single nucleotide homopolymer sites of three-mers and above, which accounted for 38.18% of the genome length, a density of 668.1 per 10 Kb. A correction procedure was carried out by incorporating 100X coverage Illumina/Solexa data into the assembly. The high-depth data corrected an estimated 1,888 single nucleotide homopolymer error sites of three-mers and above, which translates to a 454 single nucleotide homopolymer error rate of 0.51%, or 3.37 per 10 Kb. The correction also increased the overall Q20 quality. The current assembly is made up of 14 scaffolds, of which six are major scaffolds. The assembly has an N50 scaffold of 1,085,340 bp, 99.7% of the consensus bases are phred Q20 bases, and the overall error rate is 8.21 per 10 Kb. Finally, the genome has a GC content of 38.3%, with four ribosomal RNA operons and the anatoxin-a synthetase gene cluster confirmed.
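The N50 statistic used above to track assembly contiguity can be computed directly from the contig or scaffold lengths. A minimal sketch (the figures in the abstract come from the real assembly, not from this toy function):

```python
def n50(lengths):
    """N50: the largest length L such that contigs/scaffolds of length
    >= L together cover at least half of the total assembly length."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0
```

Scaffolding raises N50 because joining contigs into scaffolds concentrates the assembly length into fewer, longer pieces, exactly the jump from 10,548 bp (contigs) to over 1 Mb (scaffolds) reported above.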
  • Taubert, Stefan (Helsingin yliopisto / University of Helsinki / Helsingfors universitet, 2004)
  • Soini, Assi-Johanna (2017)
    Comparing meteorite densities with the densities of small solar system bodies provides clues to the nature of asteroid interiors, especially the accretional and collisional processes of asteroids, which reflect the evolution of the early solar nebula. Bjurböle is an L/LL4 ordinary chondrite. Bjurböle meteorites have high friability and porosity compared to other ordinary chondrites. Bjurböle meteorites are compositionally homogeneous, and any density variations are attributable to their internal structure. In addition, the Bjurböle meteorite shower consists of numerous recovered meteorites, thus sampling a large volume of the Bjurböle meteoroid. The volumes of ten Bjurböle meteorites ranging in mass from 17.27 g to 13.48 kg were determined using a non-contaminating and non-destructive 3D laser scanner and a pycnometer. Masses were determined using different scales. Densities were calculated based on the volumes, and porosities were derived from the acquired densities. No trend in density or porosity as a function of meteorite mass was found. The absence of a trend in Bjurböle meteorites can be interpreted in terms of the distribution of strength and porosity within the parent meteoroid body. It suggests that density and porosity are inhomogeneously distributed within the parent body and that weaker parts fragmented and disintegrated during atmospheric entry. Only the parts above a certain strength survive, and their sizes vary within the parent body, forming meteorites ranging in size from grams to tens of kilograms.
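Bulk density and porosity follow from the two volume measurements in a straightforward way: the laser scanner gives the bulk volume (pores included), the pycnometer the grain volume (pores excluded). A minimal sketch with invented, illustrative numbers rather than the thesis's measured values:

```python
def density_and_porosity(mass_g, bulk_volume_cm3, grain_volume_cm3):
    """Bulk density, grain density and porosity of a meteorite sample."""
    bulk = mass_g / bulk_volume_cm3      # g/cm^3, pores included
    grain = mass_g / grain_volume_cm3    # g/cm^3, pores excluded
    porosity = 1.0 - bulk / grain        # fraction of bulk volume that is pores
    return bulk, grain, porosity
```

Since bulk/grain density equals the grain-to-bulk volume ratio, porosity is simply the fraction of the scanned volume not accounted for by solid grains.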
  • Dimitrova, Maria (2016)
    Ionic liquids are chemical compounds with low symmetry, which is manifested by the existence of the liquid phase below room temperature. A common class of ionic liquids is based on the imidazolium cation and an inorganic anion. The specific structure gives rise to some peculiar properties, including low vapour pressure, thermal and chemical stability, electrical conductivity, catalytic activity, and good solvation ability for both polar and non-polar compounds. The complex non-covalent interactions between the ions give rise to an internal structure with specific distribution of the polar and non-polar moieties. Of particular interest is the cage-like structure suggested by 129Xe NMR spectroscopy, and confirmed by molecular dynamics simulations, as small molecules or noble gas atoms can be embedded in these cavities. Computational studies on ionic liquids can be performed at different levels of theory using a multiscale approach. Molecular dynamics can give the distribution of ion pairs in the bulk structure. Density functional theory allows evaluations of the intermolecular interactions in small clusters. High-level ab initio methods are suitable for calculating thermodynamic properties and interaction energies. In this work, the ionic liquid 1-butyl-3-methylimidazolium chloride and its interactions with xenon have been investigated using density functional theory calculations. Studies on an isolated pair provided geometrical parameters, and revealed a favourable interaction with a xenon atom. The calculation on a system consisting of four ion pairs showed that the properties of ionic liquids have to be investigated on larger systems in order to avoid artificial interactions. A cluster consisting of 32 ion pairs was optimized at the PBEh-3c/def2-mSVP level of theory. The interaction energy with xenon was found to be 5.4 kcal/mol, which confirms the experimentally observed ability of imidazolium-based ionic liquids to dissolve the noble gas.
  • Terletskaia, Mariia (2023)
    In recent decades, more and more attention has been paid to solar energy because of the need to ensure a “green” and sustainable future. Solar cells have been treated as one of the most promising technologies for solar energy utilization. Since conversion of sunlight into electricity mainly passes through the light-absorbing material (absorber), its optoelectronic properties largely determine the cell performance. Among the existing absorbers, inorganic lead-free perovskites, like CsSnI3, are of great interest due to high potential efficiency, increased stability and the absence of toxic components. However, currently used fabrication techniques limit the quality of the materials and their application in large-scale production. Atomic layer deposition (ALD) is a thin film fabrication technique which is now widely used in electronics and optoelectronics. Based on the principle of sequential saturated surface reactions, it is able to provide almost atomic-level control over the thickness and composition of the film. Moreover, the principle ensures the formation of uniform films on large surfaces. Since precise composition control and scalability are of great importance for the efficiency of perovskite solar cells, ALD acts as an excellent tool for production of this type of absorber. The literature review of this thesis examines perovskites as an absorber material for commercially efficient solar cells. The aim is to give the reader an overview of solar cell performance, currently available absorber materials and the motivation for perovskites to become a promising cost-efficient solution. Additionally, the most common fabrication techniques for perovskite structures are introduced together with their limitations to emphasize the expediency of further experiments. The experimental part combines development of SnI2 thin film deposition in an ALD reactor with a subsequent use of the technique in conversion to perovskite for future solar cell applications.
Unfortunately, the applicability of SnI2 ALD with the proposed chemical process became doubtful due to the multitude of issues that arose during the investigation. However, successful results on SnI2 pulsed chemical vapor deposition (pCVD) in the same ALD reactor supported the feasibility of the chemical process. Application of the optimized pCVD technique for the conversion of CsI thin films, prepared by ALD, made it possible to obtain phase-pure CsSnI3 perovskite. In addition, the conversion experiments demonstrate that the use of SnI2 pCVD allows the formation of uniform and conformal perovskite thin films with a promising band gap of 1.7 eV.
  • Miinalainen, Lumi (2024)
    Smooth manifolds extend the tools of mathematical analysis from Euclidean spaces to more general topological spaces. De Rham's theorem adds to this a connection to algebraic topology by showing that certain topological invariants of manifolds can be characterised using either analysis or topology. In other words, the analytic properties of a manifold reveal something about its topological properties, and vice versa. This thesis presents two proofs of de Rham's theorem. The first proves the theorem in its classical form, which requires understanding only the basic theory of manifolds and singular homology. The second proof is formulated very generally in terms of sheaves; the necessary sheaf theory is presented almost entirely in the text. This structure divides the text naturally in two. The first part briefly reviews the basics of de Rham cohomology and singular homology. Next, singular cohomology and the integration of chains on manifolds are introduced, leading to the proof of the classical de Rham theorem. The second part begins with the theory of presheaves and sheaves. Then sheaf cohomology theories are introduced, together with their connection to the cohomology groups of the first part. Finally, it is shown that all sheaf cohomology theories are uniquely isomorphic to one another. In the case of de Rham cohomology and singular cohomology, a straightforward construction of this isomorphism is also given.
  • Tiihonen, Viivi (2021)
    This Master's thesis aims to illustrate how upper secondary school students of the basic and advanced mathematics syllabi perform in the national matriculation examinations. The topic is approached through the analysis-level definition of the derivative of a function, and the national core curricula for upper secondary schools from 2015 and 2019 are also briefly touched upon. The examination shows that the definition of the derivative in the basic syllabus derivative course remains rather far from the precise definition, whereas the advanced syllabus comes very close to it. To illustrate the teaching given in upper secondary school, a short textbook review of both basic and advanced syllabus textbooks is carried out. The thesis goes through, extensively and thoroughly, the derivative-focused problems of the mathematics matriculation examinations of the last five years. The problems are analysed both quantitatively and qualitatively for both the basic and the advanced syllabus. The study shows that basic syllabus examinations have contained at most three derivative problems, whereas in the advanced syllabus the number of derivative problems rises to as many as six per examination. The basic syllabus examinations have contained no problems classifiable as abstract; the advanced syllabus has had a few. In all mathematics matriculation examinations, derivative-focused problems have largely been classifiable as applied. The candidates' performance is studied using the grading instructions prepared by the Matriculation Examination Board. The analysis shows that the maximum scores obtainable from the derivative portions of individual problems classified as derivative problems are around one third in both basic and advanced syllabus examinations. Problem analysis according to the grading instructions is carried out in detail. The main emphasis of the thesis is on the score distributions of derivative problems and their analysis.
The scores obtained are examined according to the type, quality and difficulty of the problems. The study shows that in the basic syllabus, both the best and the worst performance concern differentiating a function and evaluating the derivative at a given point. Basic problems yielded more points than applied problems, and easy problems were mastered clearly better than more demanding ones. Nevertheless, basic syllabus students most often chose applied problems and problems of abstract difficulty in the examinations. Advanced syllabus candidates, in turn, perform best in traditional differentiation of functions and worst in applications concerning the differentiability of a function. Some variation occurred in the classification of problems by quality and difficulty. The study shows that advanced syllabus students prefer extremum problems in the matriculation examinations, while problems focusing on examining differentiability remain less popular.
  • Shahriyer, Ahmed Hasan (2020)
    Local sources, which can be quite diverse, influence the spatial distribution of air pollutants in urban settings. For better air quality forecasting, constant monitoring of pollutants and a high volume of measurements are necessary at many locations. Building a dense air quality network using only reference instruments is expensive and not feasible. The use of a complementary sensor like the Vaisala AQT 420 can help achieve the goal of creating a robust air quality network. As part of the Helsinki metropolitan Air Quality Testbed (HAQT) project, the AQT 420 was tested for its suitability as a complementary component in an air quality monitoring network. The AQT 420 is capable of measuring NO2, PM2.5, PM10, CO, O3, SO2, relative humidity (RH), temperature, wind speed (WS), wind direction (WD), and air pressure (AP). Proxies for condensation sink (CS), black carbon (BC), particle number concentration (N), and Pegasor AQ urban diffusion current (PAQDCLDSA, which can be parameterized to calculate lung deposited surface area (LDSA) concentrations) were developed for an urban background site in Helsinki, Finland. The intention is to use variables measured by the AQT 420 to predict additional variables by using proxies. Proxy variables help to maximize the output of the AQT 420 sensors, giving extra data extraction capability. PM2.5, NO2, RH and temperature yielded reliable proxies for both CS and PAQDCLDSA, with correlation coefficients r = 0.85 and r = 0.83, respectively. PM2.5 and NO2, and NO2 and RH, were enough to produce satisfactory proxy parameters for BC (r = 0.80) and N (r = 0.76), respectively. Additionally, campaign data for sulfuric acid (SA) from a site in Helsinki, Finland were used to produce a proxy for SA. SO2, global radiation, CS and RH gave the best version of that proxy (r = 0.85).
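The proxies above are judged by their correlation coefficient r against reference measurements. As a reminder of what that figure measures, here is a minimal Pearson correlation sketch (illustrative only; the thesis's proxies are fitted functions of several measured variables, not this two-column computation):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

An r of 0.85 between a proxy and its reference thus means strong, but not perfect, linear agreement; r = 1 would require the proxy to track the reference exactly up to scale and offset.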
  • Aho, Jari (Helsingin yliopisto / Helsingfors universitet / University of Helsinki, 2008)
    The purpose of this study is to describe the development of the application of mass spectrometry to the structural analysis of non-coding ribonucleic acids during the past decade. Mass spectrometric methods are compared with traditional gel electrophoretic methods, the performance characteristics of mass spectrometric analyses are studied, and future trends in the mass spectrometry of ribonucleic acids are discussed. Non-coding ribonucleic acids are short polymeric biomolecules which are not translated to proteins, but which may affect gene expression in all organisms. Regulatory ribonucleic acids act through transient interactions with key molecules in signal transduction pathways. Interactions are mediated through specific secondary and tertiary structures. Posttranscriptional modifications in the structures of molecules may introduce new properties to the organism, such as adaptation to environmental changes or development of resistance to antibiotics. In the scope of this study, the structural studies include i) determination of the sequence of nucleobases in the polymer chain, ii) characterisation and localisation of posttranscriptional modifications in nucleobases and in the backbone structure, iii) identification of ribonucleic acid-binding molecules and iv) probing of higher order structures in the ribonucleic acid molecule. Bacteria, archaea, viruses and HeLa cancer cells have been used as target organisms. Synthesised ribonucleic acids consisting of structural regions of interest have been used frequently. Electrospray ionisation (ESI) and matrix-assisted laser desorption ionisation (MALDI) have been used for the ionisation of ribonucleic acid analytes. Ammonium acetate and 2-propanol are common solvents for ESI. Trihydroxyacetophenone is the optimal MALDI matrix for the ionisation of ribonucleic acids and peptides. Ammonium salts are used in ESI buffers and MALDI matrices as additives to remove cation adducts.
Reverse phase high performance liquid chromatography has been used for desalting and fractionation of analytes either off-line or on-line, coupled with the ESI source. Triethylamine and triethylammonium bicarbonate are used as ion pair reagents almost exclusively. A Fourier transform ion cyclotron resonance analyser using ESI coupled with liquid chromatography is the platform of choice for all forms of structural analyses. A time-of-flight (TOF) analyser using MALDI may offer a sensitive, easy-to-use and economical solution for simple sequencing of longer oligonucleotides and for analyses of analyte mixtures without prior fractionation. Special analysis software is used for computer-aided interpretation of mass spectra. With mass spectrometry, sequences of 20-30 nucleotides in length may be determined unambiguously. Sequencing may be applied to the quality control of short synthetic oligomers for analytical purposes. Sequencing in conjunction with other structural studies enables accurate localisation and characterisation of posttranscriptional modifications and identification of nucleobases and amino acids at the sites of interaction. High throughput screening methods for RNA-binding ligands have been developed. Probing of the higher order structures has provided supportive data for computer-generated three-dimensional models of viral pseudoknots. In conclusion, mass spectrometric methods are well suited for structural analyses of small species of ribonucleic acids, such as short non-coding ribonucleic acids in the molecular size region of 20-30 nucleotides. Structural information not attainable with other methods of analysis, such as nuclear magnetic resonance and X-ray crystallography, may be obtained with the use of mass spectrometry. Ligand screening may be used in the search for possible new therapeutic agents.
Demanding assay design and challenging interpretation of data require multidisciplinary knowledge. The application of mass spectrometry to structural studies of ribonucleic acids is probably most efficiently conducted in specialist groups consisting of researchers from various fields of science.
  • Keskioja, Sanna (Helsingin yliopisto / Helsingfors universitet / University of Helsinki, 2007)
    Requirements engineering is an important phase in software development where the customer's needs and expectations are transformed into a software requirements specification. The requirements specification can be considered as an agreement between the customer and the developer where both parties agree on the expected system features and behaviour. However, requirements engineers must deal with a variety of issues that complicate the requirements process. The communication gap between the customer and the developers is among the typical reasons for unsatisfactory requirements. In this thesis we study how the use case technique could be used in requirements engineering to bridge the communication gap between the customer and the development team. We also discuss how use cases can be used as a basis for acceptance test cases.
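A use case's main success scenario and its extensions can be turned almost mechanically into acceptance tests. The sketch below is purely illustrative: the `Account` class, the withdrawal scenario and all names are invented for this example, not taken from the thesis:

```python
class Account:
    """Minimal stub of a hypothetical system under test."""
    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return amount

# Acceptance test mirroring the use case steps:
# 1. Precondition: the customer has 100 units on the account.
# 2. Main scenario: the customer withdraws 40 units.
# 3. Postcondition: the balance is 60 units.
def test_withdraw_main_scenario():
    account = Account(balance=100)
    assert account.withdraw(40) == 40
    assert account.balance == 60

# Extension of the use case: an over-large withdrawal is rejected
# and the balance is left unchanged.
def test_withdraw_insufficient_funds():
    account = Account(balance=100)
    try:
        account.withdraw(150)
        assert False, "expected the withdrawal to be rejected"
    except ValueError:
        assert account.balance == 100
```

Because each test step names a step of the scenario, the customer can read the tests as a restatement of the agreed behaviour, which is exactly the bridging role discussed above.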
  • Widenius, Jyrki (2024)
    The spread of the global COVID-19 pandemic has revealed gaps in our knowledge of the spread of airborne diseases. More research is needed to mitigate outbreaks, to find the best protective practices, and to prevent future incidents. The human cough offers an effective pathway for virions to spread between people and should therefore be a focal point of this research. Artificial coughing would be of great help when studying, for example, the spreading of virions in a space or the effectiveness of different protective measures aimed at hindering the spread of disease. For these needs, a novel artificial coughing head robot is presented here. This microcontroller-operated coughing head aerosol generator has been developed in the Tampere University Aerosol Physics Laboratory in collaboration with VTT Technical Research Centre of Finland Ltd. It applies a proportional valve and an expansion tank to produce coughs from pressurized air. The coughed aerosol is produced by the shear forces of the air flow guided through a mesh grid wetted with artificial saliva. The presented aerosol generator can produce realistic human-like coughs that contain up to 3.1 × 10⁶ aerosol particles per cough. The particle diameter is lognormally distributed, with an aerodynamic geometric mean diameter of 1.29 (±0.07) μm and a geometric standard deviation of 2.89 (±0.11) at the settings found. Particle production can be adjusted by changing the grid or the liquid concentration and amount. The cough flow profile, i.e., its strength as well as the rise, on, and fall times of the cough, is software adjustable. The maximum momentary peak flow velocity is close to 800 l/min. The exit velocity of a cough plume is 3.9 m/s, and the plume reaches a distance of 3 m in 6.5 s. All these values fall within the ranges reported in the literature for human-produced coughs. In addition to the characterization tests, a small measurement campaign with a real surrogate virus was carried out.
In the campaign experiments it was found that fitting the cough generator with a face mask reduced the exposure of a receiver to the test virus aerosol particles by at least 85% compared to a reference measurement without masks, whereas fitting only the receiver with a mask had essentially no effect on its exposure.
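    The reported particle-size statistics can be illustrated numerically. The sketch below (not part of the thesis; variable names and the random seed are ours) draws a lognormal sample with the abstract's geometric mean diameter and geometric standard deviation, then recovers both from the sample:

    ```python
    import numpy as np

    # Parameters reported in the abstract (assumed lognormal in diameter)
    GMD = 1.29   # aerodynamic geometric mean diameter, micrometres
    GSD = 2.89   # geometric standard deviation, dimensionless

    rng = np.random.default_rng(0)
    # Lognormal in diameter: ln(d) ~ Normal(ln(GMD), ln(GSD))
    diameters = rng.lognormal(mean=np.log(GMD), sigma=np.log(GSD), size=3_100_000)

    # Recover the geometric statistics from the sample
    gmd_est = np.exp(np.log(diameters).mean())
    gsd_est = np.exp(np.log(diameters).std())
    print(round(gmd_est, 2), round(gsd_est, 2))  # ≈ 1.29 2.89
    ```

    With a sample of this size the geometric mean and geometric standard deviation are recovered to within about 1%, which is why lognormal fits to measured cough aerosol spectra are typically reported with these two parameters.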
  • Särkijärvi, Joona (2023)
    Both descriptive combinatorics and distributed algorithms are concerned with solving graph problems under certain local constraints. This connection is not just superficial, as Bernshteyn showed in his seminal 2020 paper. This thesis focuses on that connection by restating Bernshteyn's results, showing that a common theory of locality connects these fields. We also restate the results that connect these findings to continuous dynamics: solving a colouring problem on the free part of the subshift 2^Γ is equivalent to the existence of a fast LOCAL algorithm solving the problem on finite sections of the Cayley graph of Γ. We also restate Bernshteyn's continuous version of the Lovász Local Lemma (LLL). The LLL is a powerful probabilistic tool used throughout combinatorics and distributed computing; Bernshteyn proved a version of the lemma that, under certain topological constraints, produces continuous solutions.
  • Meaney, Alexander (2015)
    X-ray computed tomography (CT) is widely used in medical imaging and materials science. In this imaging modality, cross-sectional images of a physical object are formed by taking numerous X-ray projections from different angles and then applying a reconstruction algorithm to the measured data. The cross-sectional slices can be used to form a three-dimensional model of the interior structure of the object. CT is a prime example of an inverse problem, in which the aim is to recover an unknown cause from a known effect. CT technology continues to develop, motivated by the desire for increased image quality and spatial resolution in reconstructions. In medical CT, reducing patient dose is a major goal. The branch of CT known as X-ray microtomography (micro-CT) produces reconstructions with spatial resolutions in the micrometer range. Micro-CT has been practiced at the University of Helsinki since 2008. The research projects are often interdisciplinary, combining physics with fields such as biosciences, paleontology, geology, geophysics, metallurgy and food technology. This thesis documents the design and construction of a new X-ray imaging system for computed tomography. The system is a cone beam micro-CT scanner intended for teaching and research in inverse problems and X-ray physics. The scanner consists of a molybdenum target X-ray tube, a sample manipulator, and a flat panel detector, and it is built inside a radiation shielding cabinet. Measurements were made for calibrating the measurement geometry and for testing reconstruction quality. Two-dimensional reconstructions of various samples were computed in the plane that passes through the X-ray point source and is perpendicular to the axis of rotation; in this central plane, the cone beam geometry reduces to fan beam geometry. All reconstructions were computed using the filtered backprojection (FBP) algorithm, which is the industry standard.
Tomographic reconstructions of high quality were obtained from the measurements. The results show that the imaging system is well suited for CT and the study of reconstruction algorithms.
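    The filtered backprojection algorithm mentioned above can be sketched compactly for the simpler parallel-beam case (a minimal illustration in NumPy, not the thesis's fan-beam implementation; the disk phantom, grid size, and angle count are our assumptions). Each projection is ramp-filtered in the Fourier domain and then smeared back across the image along its acquisition angle:

    ```python
    import numpy as np

    def ramp_filter(sinogram):
        """Apply the Ram-Lak (ramp) filter to each projection row via the FFT."""
        n = sinogram.shape[1]
        ramp = np.abs(np.fft.fftfreq(n))  # |frequency| in cycles per sample
        return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    def fbp(sinogram, angles):
        """Minimal parallel-beam filtered backprojection (nearest-neighbour)."""
        n = sinogram.shape[1]
        filtered = ramp_filter(sinogram)
        recon = np.zeros((n, n))
        centre = n // 2
        ys, xs = np.mgrid[:n, :n] - centre
        for proj, theta in zip(filtered, angles):
            # Detector coordinate of each pixel for this projection angle
            t = np.rint(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + centre
            np.clip(t, 0, n - 1, out=t)
            recon += proj[t]
        return recon * np.pi / len(angles)

    # Analytic sinogram of a centred disk of radius R: every parallel
    # projection of the disk is 2 * sqrt(R^2 - s^2), independent of angle.
    n, R = 64, 15
    s = np.arange(n) - n // 2
    profile = 2 * np.sqrt(np.clip(R**2 - s**2, 0, None))
    angles = np.linspace(0, np.pi, 90, endpoint=False)
    sinogram = np.tile(profile, (len(angles), 1))

    recon = fbp(sinogram, angles)
    centre_val = recon[n // 2, n // 2]  # inside the disk: close to the true density 1
    corner_val = recon[4, 4]            # outside the disk: close to 0
    ```

    Production scanners use higher-order interpolation, apodized filters, and geometry-specific weighting (fan or cone beam), but the filter-then-backproject structure is the same.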