Browsing by study line "ingen studieinriktning"

  • Mylläri, Sanna (2020)
    Objective. Depression is associated with an increased risk of chronic disease, which may be at least partly due to poor health behaviors. A growing body of evidence has associated depression with an unhealthy diet. However, the association of depression with diet quality in the long run is not well known. Furthermore, it is unclear whether dietary interventions could mitigate the harmful association of depression with diet. This study examined the association of depression with diet both cross-sectionally and longitudinally in a population-based prospective cohort. The effectiveness of an early-onset dietary intervention in modifying these associations was also investigated. Methods. The sample (n = 457) was from the Special Turku Coronary Risk Factor Intervention Project (STRIP). The intervention group (n = 209) had undergone a dietary intervention lasting from the age of 7 months until the age of 20 years. Depression was measured at age 20 using the Beck Depression Inventory II (BDI-II). Diet quality was assessed at ages 20 and 26 using a diet score calculated from food diaries. Missing values were replaced using multiple imputation by chained equations. Linear regression analyses were used to analyze the association of depression at age 20 with diet at ages 20 and 26, as well as the modifying effect of intervention group on these associations. Results. No cross-sectional association was found between depression and diet at age 20. Depression at age 20 was longitudinally associated with worse diet quality at age 26. The associations did not differ between the intervention and control groups at either time point. Conclusions. Contrary to previous research, this study did not find a cross-sectional association between depression and diet. However, this study offers novel information on longitudinal associations, suggesting that depression may have effects on diet quality that can manifest after several years. The dietary intervention was not found to be effective in modifying these associations. Since long-term effects on diet may be an important factor explaining the association of depression with chronic diseases, ways to mitigate the adverse consequences of depression for diet should be explored further.
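    To illustrate the kind of analysis described above, here is a minimal sketch of chained-equation imputation followed by a linear regression with an intervention-by-depression interaction. All column names, the file name, and the use of scikit-learn's IterativeImputer (a single imputation standing in for full MICE) are illustrative assumptions, not the thesis' actual code.

```python
# Sketch: chained-equation imputation + linear regression, assuming a pandas
# DataFrame with hypothetical columns 'bdi_20' (depression at age 20),
# 'diet_26' (diet score at age 26) and 'intervention' (0 = control, 1 = intervention).
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

df = pd.read_csv("strip_example.csv")  # hypothetical file name

# Single chained-equation imputation; a full MICE analysis would repeat this
# with several imputed datasets and pool the estimates (Rubin's rules).
imputer = IterativeImputer(max_iter=10, random_state=0)
imputed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)

# Longitudinal model with an intervention-by-depression interaction term
model = smf.ols("diet_26 ~ bdi_20 * intervention", data=imputed).fit()
print(model.summary())
```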
  • Sipilä, Katariina (2023)
    Infants' and children's diets influence normal development and overall health. A balanced diet providing essential nutrients is crucial. Recent research has examined the dietary patterns of children and infants, exploring potential associations between food components and the emergence of illnesses. Notably, investigations into relationships between dietary factors and metabolites have gained prominence. Metabolomics offers a means to investigate an individual's nutrition, health status, illnesses, and the interaction of medications and contaminants. This study aimed to elucidate the connections between diet and the serum metabolic profiles of 1-year-old Finnish children. This master's thesis used data from Finnish infants (n=439) collected by 3-day food records and questionnaires, in conjunction with metabolite assessments from blood samples collected at the age of 12 months. The investigation particularly focused on cow's milk products and breast milk. The Spearman correlation coefficient served as the primary statistical tool, utilising data derived from the DIPP Nutrition study. Infant diets primarily comprised various cow's milk products, milks and infant formulas. Noteworthy findings revealed that distinct lipids and free fatty acids were significantly associated with cow's milk product consumption and breastfeeding. In the future, this study holds potential for enhancing comprehension of diet-related disease development by employing metabolites as markers to dissect dietary impacts.
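    As a rough illustration of the correlation analysis described, the sketch below computes Spearman rank correlations between dietary intake variables and metabolite concentrations. The file name, intake columns, and metabolite names are hypothetical placeholders, not the DIPP data.

```python
# Sketch: Spearman correlations between dietary intakes and serum metabolites,
# assuming a pandas DataFrame with hypothetical column names.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("dipp_example.csv")  # hypothetical file name

diet_cols = ["cows_milk_g", "infant_formula_g", "breast_milk_feeds"]
metab_cols = ["LPC_18_2", "FFA_16_0", "PC_34_1"]  # hypothetical metabolites

for d in diet_cols:
    for m in metab_cols:
        rho, p = spearmanr(df[d], df[m], nan_policy="omit")
        print(f"{d} vs {m}: rho={rho:.2f}, p={p:.3g}")
```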
  • Virtanen, Lotta (2022)
    Digital health technologies strive to facilitate physicians' burdensome work, but their implementation seems to have brought new stressors. Adapting to new ways of working can take time, and positive changes may not be noticed until later. This study aimed to examine associations of longer-term perceived work change due to digitalisation and intensity of digital work with job strain among physicians. Differences in possible associations were examined according to the length of work experience. This study analysed cross-sectional data of the Electronic Health Records as Physicians' Tools Study collected from physicians (N=4271) working in Finland between January and March 2021. Job strain was measured with three outcomes: stress related to information systems (SRIS), time pressure, and stress. Opinions about how work had changed in the past 3 years were assessed with six statements based on the goals of digitalisation. The intensity of digital work was measured by the number of information systems used and the frequency of telemedicine work conducted. The associations were examined in multivariable linear and logistic regression analyses and adjusted for background variables. The physicians' mean SRIS and time pressure scores were 3.5 and 3.7, respectively (scales 1–5; a higher score indicated higher strain levels), and 60% reported stress. The majority disagreed with the statement regarding accelerated clinical encounters due to digitalisation, and this disagreement was associated with higher SRIS (b=.23, 95% CI [.16, .30]) and time pressure (b=.12, 95% CI [.04, .20]). Disagreement with facilitated access to patient information was associated with higher SRIS (b=.15, 95% CI [.07, .23]), and disagreement with supported decision-making was associated with higher SRIS (b=.11, 95% CI [.05, .18]) and greater odds of stress (OR=1.26, 95% CI [1.06, 1.48]). The more active role of patients received the greatest agreement among the physicians, and this agreement was associated with higher time pressure (b=.11, 95% CI [.04, .19]) and greater odds of stress (OR=1.19, 95% CI [1.02, 1.40]). Agreement with progressed interprofessional collaboration was also associated with higher time pressure (b=.10, 95% CI [.02, .18]). Intensive information system use and intensive telemedicine work were consistently and significantly associated with all job strain outcomes. Moreover, a significant interaction effect was found: physicians who did intensive telemedicine work and had less than 6 years of work experience reported the highest time pressure. Not all physicians feel that digitalisation has facilitated their work, even in the longer term, which may expose them to different strains. Digitalisation appears to be a long process of work change, during which physicians need constant support. In particular, early-career physicians could benefit from training that promotes the time management skills needed for telemedicine work. Moreover, the functions, usability and interoperability of digital health technologies should be developed to support clinical encounters better. It would also be essential to improve the availability of digital support for patients in society, as physicians' job strain related to patient activation might imply patients' weak readiness to self-manage their health digitally.
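    A compressed sketch of the regression setup described above (linear models for the continuous strain scores, logistic regression for binary stress) is shown below. All column names, the interaction term, and the file name are assumptions made for illustration only.

```python
# Sketch: multivariable linear and logistic regression for job-strain outcomes,
# assuming hypothetical columns ('sris', 'stress' coded 0/1, 'n_systems',
# 'telemedicine_freq', 'work_experience', plus background covariates).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("physician_survey_example.csv")  # hypothetical file name

# Continuous outcome (SRIS): ordinary least squares, adjusted for background variables
ols_sris = smf.ols(
    "sris ~ n_systems + telemedicine_freq + age + sex + sector", data=df
).fit()
print(ols_sris.params)  # unstandardised b coefficients

# Binary outcome (stress): logistic regression with a telemedicine-by-experience
# interaction, reported as odds ratios
logit_stress = smf.logit(
    "stress ~ n_systems + telemedicine_freq * work_experience + age + sex + sector",
    data=df,
).fit()
print(np.exp(logit_stress.params))  # odds ratios
```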
  • Uvarova, Elizaveta (2024)
    Asteroids within our Solar System attract considerable attention for their potential impact on Earth and their role in elucidating the Solar System's formation and evolution. Understanding asteroids' composition is crucial for determining their origin and history, making spectral classification a cornerstone of asteroid categorization. Spectral classes, determined from asteroids' reflectance spectra, offer insights into their surface composition. Early attempts at classification, predating 1973, utilized photometric observations in ultraviolet and visible wavelengths. The Chapman-McCord-Johnson classification system of 1973 marked the beginning of formal asteroid taxonomy, employing reflectance spectrum slopes for classification. Subsequent developments included machine learning techniques, such as principal component analysis and artificial neural networks, for improved classification accuracy. The Gaia mission's Data Release 3 has significantly expanded asteroid datasets, allowing more extensive analyses. In this study, I examine the relationship between asteroid photometric slopes, spectra, and taxonomy using a feed-forward neural network trained on known spectral types to classify asteroids of unknown types. The classification achieved a mean accuracy of 80.4 ± 2.0 % over 100 iterations and successfully separated three asteroid taxonomic groups (C, S, and X) and the asteroid class D.
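    A minimal sketch of the classification setup described above is given below, using a small feed-forward network on reflectance-derived features. The feature matrix, labels, and network size are placeholders invented for illustration, not the thesis' actual architecture or data.

```python
# Sketch: feed-forward neural network classifying asteroids into C, S, X and D,
# with placeholder features standing in for photometric slopes / reflectance data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))                  # placeholder spectral features
y = rng.choice(["C", "S", "X", "D"], size=1000)  # placeholder known spectral types

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0),
)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))  # near chance on random placeholder data
```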
  • Heinonen, Outi (2020)
    Advertising has been an area of interest in linguistic research for the past decades due to its pervasiveness and its role in shaping our notions and values in society. This study sets out to examine the socio-cultural practices in a specific field of advertising discourse, influencer marketing on social media. Influencer marketing is a recent branch of marketing, brought forth by the popularity of social media. In influencer marketing, products are marketed by individual influencers instead of corporations or companies. Brands turn to social media influencers in order to reach a wider audience and to utilize the influencers' social media presence and their ability to influence their audience to promote their products and to turn this social power into capital. The hypothesis is that this is the key difference between traditional marketing and influencer marketing. The aim of this study is to present a critical discourse analysis examining the social context and the relationship between the influencer and the reader. The methodological approach applied to the analysis of the data is critical discourse analysis, more specifically Norman Fairclough's three-dimensional model. Critical discourse analysis and Fairclough's model allow focus on the linguistic properties in addition to the production and reception processes of discourse and the socio-cultural practices within discourse. The analysis of the data showed that influencer marketing reveals consumerist ideologies that promote the purchasing of goods as a means to reach happiness and well-being, as presented by social media influencers.
  • Paavola, Jaakko (2024)
    Lenders assess the credit risk of loan applicants from both affordability and indebtedness perspectives. The affordability perspective involves assessing the applicant's disposable income after accounting for regular household expenditures and existing credit commitments, a measure called money-at-disposal or MaD. Having an estimate of the applicant's expenditures is crucial, but simply asking applicants for their expenditures could lead to inaccuracies. Thus, lenders must produce their own estimates based on statistical or survey data about household expenditures, which are then passed to the MaD framework as input parameters or used as control limits to ascertain that expenditure information reported by the applicant is truthful or at least adequately conservative. More accurate expenditure estimates in loan origination would enable lenders to quantify mortgage credit risk more precisely, tailor loan terms more aptly, and better protect customers against over-indebtedness. Consequently, this would enable lenders to be more profitable in their lending business as well as to serve their customers better. But there is also a need for interpretability of the estimates, stemming from compliance and trustworthiness motives. In this study, we examine the accuracy and interpretability of expenditure predictions of supervised models fitted to a microdataset of household consumption expenditures. To our knowledge, this is the first study to use such a granular and broad dataset to create predictive models of loan applicants' expenditures. The virtually uninterpretable "black box" models we used, aiming at maximizing predictive power, rarely did better accuracy-wise than interpretable linear regression models. Even when they did, the gain was marginal or concerned minor expenditure categories that contributed only a low share of the total expenditures. Thus, we suggest that ordinary linear regression generally provides the best combination of predictive power and interpretability. After careful feature selection, the best predictive power was attained with 20-54 predictor variables, the number depending on the expenditure category. If a very simple interpretation is needed, we suggest either a linear regression model with three predictor variables representing the number of household members, or a model based on the means within 12 "common sense groups" into which we divided the households. An alternative solution, with predictive power somewhere between the full linear regression model and the two simpler models, is to use decision trees, which provide easy interpretation in the form of a set of rules.
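    The comparison described above can be sketched roughly as follows: an interpretable linear regression against a non-linear "black box" regressor on one expenditure category, scored by cross-validation. The column names, target, and model choices are illustrative assumptions, not the thesis' actual feature set or models.

```python
# Sketch: comparing an interpretable linear model with a "black box" regressor
# for one expenditure category, using hypothetical household-level columns.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("household_budget_example.csv")  # hypothetical microdata file
features = ["n_adults", "n_children", "disposable_income", "dwelling_size_m2"]
target = "food_expenditure"

X, y = df[features], df[target]

for name, model in [
    ("linear regression", LinearRegression()),
    ("gradient boosting", GradientBoostingRegressor(random_state=0)),
]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.3f}")
```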
  • Ulkuniemi, Uula (2022)
    This thesis presents a complication risk comparison of the most used surgical interventions for benign prostatic hyperplasia (BPH). The investigated complications are the development of either a post-surgery BPH recurrence (reoperation), a urethral stricture, or stress incontinence severe enough to require a surgical procedure for its treatment. The analysis is conducted with survival analysis methods on a data set of urological patients sourced from the Finnish Institute for Health and Welfare. The development of complication risk is estimated with the Aalen-Johansen estimator, and the effects of certain covariates on the complication risks are estimated with the Cox PH regression model. One of the regression covariates is the Charlson Comorbidity Index score, which attempts to quantify the disease load of a patient at a certain point in time as a single number. A novel Spark algorithm was designed to facilitate the efficient calculation of the Charlson Comorbidity Index score on a data set of the same size as the one used in the analyses here. The algorithm achieved at least similar performance to the previously available ones and scaled better on larger data sets and under stricter computing resource constraints. Both the urethral stricture and urinary incontinence endpoints suffered from a lower number of samples, which made the associated results less accurate. The estimated complication probabilities for both endpoint types were also so low that the BPH procedures couldn't be reliably differentiated. In contrast, the BPH reoperation risk analyses yielded noticeable differences among the initial BPH procedures. Regression analysis results suggested that the Charlson Comorbidity Index score isn't a particularly good predictor for any of the endpoints. However, certain cancer types that are included in the Charlson Comorbidity Index score did predict the endpoints well when used as separate covariates. An increase in the patient's age was associated with a higher complication risk, but less so than expected. For the urethral stricture and urinary incontinence endpoints, the number of preceding BPH operations was usually associated with a notable increase in complication risk.
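    A rough sketch of the two estimation steps named above, using the lifelines library, is given below. The data frame, column names, event coding, and covariates are hypothetical, and the cause-specific Cox model shown is only one common way to handle the competing events.

```python
# Sketch: cumulative incidence of reoperation (Aalen-Johansen) and a Cox PH model,
# assuming hypothetical columns 'time' (follow-up years), 'event' (0=censored,
# 1=reoperation, 2=competing event), 'age', 'cci' and procedure indicator columns.
import pandas as pd
from lifelines import AalenJohansenFitter, CoxPHFitter

df = pd.read_csv("bph_cohort_example.csv")  # hypothetical cohort file

# Cumulative incidence of reoperation, accounting for competing risks
ajf = AalenJohansenFitter()
ajf.fit(df["time"], df["event"], event_of_interest=1)
print(ajf.cumulative_density_.tail())

# Cause-specific Cox model for reoperation (competing events treated as censoring)
cox_df = df.assign(reop=(df["event"] == 1).astype(int))[
    ["time", "reop", "age", "cci", "procedure_turp", "procedure_laser"]
]
cph = CoxPHFitter()
cph.fit(cox_df, duration_col="time", event_col="reop")
cph.print_summary()
```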
  • Rasola, Miika (2020)
    Resonant inelastic X-ray scattering (RIXS) is one of the most powerful synchrotron-based methods for attaining information on the electronic structure of materials. Novel ultra-brilliant X-ray sources, X-ray free electron lasers (XFEL), offer new intriguing possibilities beyond the traditional synchrotron-based techniques, facilitating the transition of X-ray spectroscopic methods to the nonlinear intensity regime. Such nonlinear phenomena are well known in the optical energy range, less so at X-ray energies. The transition of RIXS to the nonlinear regime could have a significant impact on X-ray based materials research by enabling more accurate measurements of previously observed transitions, allowing the detection of weakly coupled transitions in dilute samples, and possibly uncovering completely unforeseen information or working as a platform for novel intricate methods of the future. Nonlinear RIXS, or stimulated RIXS (SRIXS), at an XFEL has already been demonstrated in the simplest possible proof-of-concept case. In this work a comprehensive introduction to SRIXS is presented from a theoretical point of view, starting from the very beginning, thus making it suitable for anyone with a basic understanding of quantum mechanics and spectroscopy. To start off, the principles of many-body quantum mechanics are revised and the configuration interaction method for representing molecular states is introduced. No previous familiarity with X-ray-matter interaction or RIXS is required, as the molecular and interaction Hamiltonians are carefully derived; based on these, a thorough analysis of the traditional RIXS theory is presented. In order to stay in touch with the real world, the basic experimental facts are recapped before moving on to SRIXS. First, an intuitive picture of the nonlinear process is presented, shedding some light onto the term "stimulated" while introducing basic terminology and some X-ray pulse schemes along with futuristic theoretical examples of SRIXS experiments. After this, a careful derivation of the Maxwell-Liouville-von Neumann theory up to quadrupole order is presented for the first time. Finally, the chapter is concluded with a short analysis of the experimental status quo at XFELs and some speculation on possible transition metal samples where SRIXS in its current state could be applied to observe quadrupole transitions, advancing the field remarkably.
  • Berglund, Jenny Johanna (2021)
    The aim of the thesis is to examine how impurity, disgust, and threat create art-horror in selected stories from Clive Barker's Books of Blood and Thomas Ligotti's Songs of a Dead Dreamer. Both works are short story collections from the 1980s and can be classified as horror literature. My analysis concentrates on the threatening spaces and environments in the stories and on descriptions of abject and disgusting bodies. The analysis is grounded in Noël Carroll's theory of how art-horror, a kind of fictional horror emotion that can arise from reading horror literature, always consists of a combination of both disgust and threat. Disgust always involves some form of impurity, and the impure is created when some categorical boundary is transgressed. I also draw on Julia Kristeva's concept of the abject, which likewise describes something impure and involves a kind of boundary transgression; the abject is something that is a mixture of usually separate categories and can create a violent shock for the one who witnesses a mixing or disturbance of these categories. I also use Martha Nussbaum's theory of disgust to analyse the impure in the works. My analysis shows how the characters' safe spaces described in the works are invaded by various impure monsters and beings, which transform them into threatening environments through their presence. These intruders can be creatures such as ghosts, demons, and murderers, or more abstract beings and forces that symbolise a more intangible threat. In Barker's stories the invasion often happens gradually in larger, public spaces such as big cities, and in Ligotti's works in more private environments such as one's own home. The thesis also examines why disgusting and impure bodies are seen as abject: the bodies break some kind of categorical boundary and consequently cause a deep and radical disturbance in our worldview or self-image. In Barker's stories, various bodily abjects are created, among other things, through depictions of bodies with sickly features and corpse-like traits, through descriptions of transgressions of accepted boundaries concerning physical contact with animals, and through detailed descriptions of suffering human bodies. In Ligotti's works, the abject and the disgust are instead directed at one's own body, as the characters or their body parts often undergo various shape-shifts and transformations or are contaminated with foreign substances. The abject monster bodies can symbolise parts of various oppressive social systems, and these systems often force the characters to become part of these hierarchies. The threat created by the monsters and the abject bodies can be purely physical, but it can also have moral, psychological, or political dimensions and can be directed at individual characters or at larger societies and communities. The impure horror figures represent and embody all these threats, and when they are encountered in horror literature, they function as physical manifestations and mirror images of the characters' fears and create art-horror.
  • Pentikäinen, Katariina (2023)
    This study investigated Finnish students' attitudes towards English accents. Since the topic had not been studied recently in the Finnish context, and previously mostly with upper secondary students, the study set out to provide current insight into the topic. Furthermore, to observe how age, education, and proficiency level are related to the attitudes, students at both lower and upper secondary level and in both regular and bilingual (Fi-En) programs were selected as informants. The attitudes were studied via a combination of direct and indirect methods. First, a widely used indirect attitudinal measure, the verbal-guise technique, was employed to observe what kinds of immediate reactions different accents elicit in the students. The students heard authentic samples of eight different accents, two native and six non-native, and evaluated them on various adjectives on a semantic differential scale. Second, the students answered questionnaire items regarding their familiarity with accents and their views and practices in relation to speaking English. Altogether 156 students completed the survey. Mainly quantitative analysis of the students' answers showed that although their attitudes towards the accents were mostly positive, they had significantly more negative attitudes towards non-native than native English accents. Furthermore, accent strength seemed to be a discriminating factor, with mildly accented speakers preferred over heavily accented speakers. While all students showed very similar preferences with respect to the accents, the strength of the attitudes varied somewhat between the respondent groups, with bilingual upper secondary students indicating more positive attitudes than the rest of the students. With respect to the different characteristics that the speakers were rated on, the students considered native speakers very competent, intelligent and fluent, whereas non-native speakers were regarded as more honest than competent. The students were very adept at categorizing the speakers as native or non-native; however, apart from the British and heavy Finnish accents, they struggled with recognizing the speakers' accents. Although no significant correlation was found between accent recognition and attitude, the better the students identified a speaker as non-native, the more negative the attitudes they showed towards the speaker. The study found that although Finnish students' attitudes towards non-native accents have become more positive compared to previous studies, they are still somewhat subject to the native-speaker, standard-language ideology. Further research is needed to provide deeper insight into the ideologies functioning behind this, at least covert, preference for native speakers.
  • Aarne, Onni (2022)
    The content we see is increasingly determined by ever more advanced recommender systems, and popular social media platform TikTok represents the forefront of this development (See Chapter 1). There has been much speculation about the workings of these recommender systems, but precious little systematic, controlled study (See Chapter 2). To improve our understanding of these systems, I developed sock puppet bots that consume content on TikTok as a normal user would (See Chapter 3). This allowed me to run controlled experiments to see how the TikTok recommender system would respond to sock puppets exhibiting different behaviors and preferences in a Finnish context, and how this would differ from the results obtained by earlier investigations (See Chapter 4). This research was done as part of a journalistic investigation in collaboration with Long Play. I found that TikTok appears to have adjusted their recommender system to personalize content seen by users to a much lesser degree, likely in response to a previous investigation by the WSJ. However, I came to the conclusion that, while sock puppet audits can be useful, they are not a sufficiently scalable solution to algorithm governance, and other types of audits with more internal access are needed (See Chapter 5).
  • Hurme, Erika (2023)
    This thesis examines Finnish upper secondary school English teachers' practices and beliefs regarding authenticity and autonomy in the EFL classroom. The aim of the study is to find out how EFL teachers promote experiences of authenticity and learner autonomy in the classroom and in this way create connections to students' use of English outside school. The study is also interested in English teachers' attitudes towards authenticity and autonomy as well as the teachers' support for students' Extramural English use. Research on EFL learners' Extramural English use has reported a gap between formal and informal language learning settings, and this thesis investigates the applicability of experiences of authenticity and learner autonomy in bridging this gap. The data of the study consist of qualitative classroom observations and interviews with English teachers. Four upper secondary school English teachers participated in the study. Three lessons were observed from each teacher, which adds up to twelve observed lessons in total. The observations focused on the teachers' motivational practice and teaching materials. In addition, semi-structured retrospective interviews were conducted with the teachers after the classroom observations. Qualitative content analysis was applied to both sets of data to describe the teachers' practices and attitudes towards authenticity and autonomy in language learning. The data analysis shows that while the teachers used a variety of motivational strategies to promote authenticity and autonomy in the classroom, each teacher also had their preferred motivational practices that characterised their teaching. Comparing the classroom observation data and the interview data revealed a connection between the teachers' practices and their definitions of and attitudes towards authenticity in language learning. While the teachers considered authenticity and autonomy important in language learning, they perceived promoting them in class as difficult due to constraints such as available time and materials. Authenticity and autonomy were promoted in the classroom mostly through strategies of teacher discourse, which aimed at arguing for the relevance or purpose of the learning tasks and at connecting the learning to students' everyday lives. Interestingly, the teachers were not especially keen on supporting their students' Extramural English practices and questioned whether students desire experiences of authenticity and autonomy at school at all. The results of the study shed light on the complex relationship between formal and informal language learning settings from the EFL teachers' perspective.
  • Lahti, Anni (2020)
    Objectives. Autism spectrum disorder (ASD) is a neurobiological developmental disorder that involves challenges in social interaction and restricted or repetitive behaviors. Since the generalization and maintenance of acquired skills is essential in the rehabilitation of ASD, it is important to integrate interventions into the home environment through parental guidance. There has been some research on the rehabilitation of children with ASD in Finland, but no research has been conducted on the guidance of parents from the perspective of speech therapists. The purpose of this study is to find out the views on parental guidance of speech therapists who rehabilitate children with ASD. Interviews with speech therapists clarify the ways in which parents of children with ASD are guided in speech therapy as well as the challenges and contributing factors in parental guidance. Methods. The research method was a semi-structured interview. Five speech therapists with experience in the rehabilitation of ASD were interviewed. The interviews were recorded and transcribed. The data were analyzed by content analysis. Results and conclusions. Parental guidance for children with ASD was divided into information sharing, interaction and discussion, and direct guidance. The challenges were parental strain, parental attitudes and, in some cases, multiculturalism. Contributing factors appeared in training practices and home conditions. Challenges and benefits were influenced by the individuality of families. Speech therapists hoped for more opportunities to arrange separate parental guidance sessions so that they would be able to discuss the methods and the family situation in more depth without the child's presence. Speech therapists considered parental guidance important in the rehabilitation of children with ASD because guiding parents helped to increase skills in everyday life and guaranteed training intensity. As parental strain was identified as a challenge in this study, it would be important to explore how parents could be more effectively supported during rehabilitation. In addition, it could be explored whether separate parental guidance sessions should be increased or whether the number of parental guidance sessions has already been adapted through the development of new working practices.
  • Tikander, Katarina (2023)
    Objectives. Autism spectrum disorder (ASD) has been associated with anomalies in pain sensitivity, although the results of previous studies have not been concordant. Since sensory atypicalities are a frequent feature of ASD, this has led to the hypothesis of a sensory dysfunction that affects the whole sensory system, including the pain system. ASD has also been associated with increased pain disturbance in previous studies. This Master's thesis investigated the relationship between ASD traits and pain intensity, pain interference, and pain sensitivity. Moreover, the aim was to study whether sensory atypicalities have a different impact on pain intensity, pain interference, and pain sensitivity when accompanied by ASD traits. Methods. The sample consisted of 947 adults aged between 18 and 60 years. The data were collected using an online questionnaire, which contained items about ASD traits, pain intensity and pain interference during the past week, pain sensitivity in different situations, and sensory hyper- and hyposensitivities. In addition, the questionnaire included items on background information relevant to the study. The subjects were divided into two groups based on the score obtained from the ASD trait questionnaire: the ASD trait group and the reference group. Results and conclusions. There was a significant negative association between ASD traits and pain intensity, such that the estimates of pain intensity were significantly lower in the ASD trait group than in the reference group, despite more self-reported comorbidities and chronic pain being present in the ASD trait group. There were no significant associations between ASD traits and pain interference or pain sensitivity. Furthermore, there was a significant interaction between ASD traits and sensory atypicalities for pain intensity, pain interference, and pain sensitivity: as the number of sensory atypicalities increased, pain intensity, pain interference, and pain sensitivity increased significantly in the reference group. In contrast, the impact of sensory atypicalities on pain was significantly weaker in individuals with ASD traits. The results imply that individuals with ASD traits may have lower pain sensitivity in everyday life but regular pain sensitivity in specific pain situations. The impact of sensory atypicalities on pain seems to be stronger in individuals without ASD traits, which does not provide support for the hypothesis of sensory dysfunction as an underlying mechanism of pain sensitivity in ASD.
  • Kinnunen, Samuli (2024)
    Chemical reaction optimization is an iterative process that aims to identify the reaction conditions that maximize reaction output, typically yield. The evolution of optimization techniques has progressed from intuitive approaches to simple heuristics and, more recently, to statistical methods such as the Design of Experiments approach. Bayesian optimization, which iteratively updates beliefs about a response surface and suggests parameters that both exploit conditions near the known optima and explore uncharted regions, has shown promising results by reducing the number of experiments needed to find the optimum in various optimization tasks. In chemical reaction optimization, the method makes it possible to minimize the number of experiments required to find the optimal reaction conditions. Automated tools like pipetting robots hold potential to accelerate optimization by executing multiple reactions concurrently. Integrating Bayesian optimization with automation not only reduces the workload but also improves throughput and optimization efficiency. However, the adoption of these advanced techniques faces a barrier, as chemists often lack proficiency in machine learning and programming. To bridge this gap, the Automated Chemical Reaction Optimization Software (ACROS) is introduced. This tool orchestrates an optimization loop: Bayesian optimization suggests reaction candidates, the parameters are translated into commands for a pipetting robot, the robot executes the operations, a chemist interprets the results, and the data are fed back to the software for suggesting the next reaction candidates. The optimization tool was evaluated empirically using a numerical test function, on a Direct Arylation reaction dataset, and in real-time optimization of Sonogashira and Suzuki coupling reactions. The findings demonstrate that Bayesian optimization efficiently identifies optimal conditions, outperforming the Design of Experiments approach, particularly in optimizing discrete parameters in batch settings. Three acquisition functions were compared: Expected Improvement, Log Expected Improvement, and Upper Confidence Bound. It can be concluded that Expected Improvement-based methods are more robust, especially in batch settings with process constraints.
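    To make the optimization loop concrete, here is a schematic Bayesian optimization step over a discrete grid of reaction conditions, using a Gaussian process surrogate and the Expected Improvement acquisition function. The condition grid, observed yields, and batch size are invented for illustration and are not part of ACROS; picking the top-k EI points is a naive stand-in for a proper batch acquisition strategy.

```python
# Sketch: one Bayesian-optimization round over discrete reaction conditions
# (Gaussian process surrogate + Expected Improvement), with invented numbers.
from itertools import product

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Hypothetical discrete search space: temperature (C) x catalyst loading (mol%)
grid = np.array(list(product([40, 60, 80, 100], [1, 2, 5, 10])), dtype=float)

# Previously measured reactions (conditions -> yield %), invented values
X_obs = np.array([[40.0, 1.0], [80.0, 5.0], [100.0, 10.0]])
y_obs = np.array([12.0, 55.0, 48.0])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X_obs, y_obs)

mu, sigma = gp.predict(grid, return_std=True)
best = y_obs.max()
imp = mu - best
z = imp / np.maximum(sigma, 1e-9)
ei = imp * norm.cdf(z) + sigma * norm.pdf(z)  # Expected Improvement

# Suggest the next batch of three conditions for the pipetting robot
batch = grid[np.argsort(-ei)[:3]]
print("next suggested conditions:\n", batch)
```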
  • Ilse, Tse (2019)
    Background: Electroencephalography (EEG) depicts electrical activity in the brain and can be used in clinical practice to monitor brain function. In neonatal care, physicians can use continuous bedside EEG monitoring to determine the cerebral recovery of newborns who have suffered birth asphyxia, which creates a need for frequent, accurate interpretation of the signals over the monitoring period. An automated grading system can aid physicians in the Neonatal Intensive Care Unit by automatically distinguishing between different grades of abnormality in the neonatal EEG background activity patterns. Methods: This thesis describes using a support vector machine as a base classifier to classify seven grades of EEG background pattern abnormality in data provided by the BAby Brain Activity (BABA) Center in Helsinki. We are particularly interested in reconciling the manual grading of EEG signals by independent graders, and we analyze the inter-rater variability of EEG graders by comparing a classifier built using selected epochs graded in consensus with a classifier using full-duration recordings. Results: The inter-rater agreement score between the two graders was κ=0.45, which indicated moderate agreement between the EEG grades. The most common grade of EEG abnormality was grade 0 (continuous), which made up 63% of the epochs graded in consensus. We first trained two baseline reference models using the full-duration recordings and the labels of the two graders, which achieved 71% and 57% accuracy. We achieved 82% overall accuracy in classifying selected patterns graded in consensus into seven grades using a multi-class classifier, though this model did not outperform the two baseline models when evaluated with the respective graders' labels. In addition, we achieved 67% accuracy in classifying all patterns from the full-duration recordings using a multilabel classifier.
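    A compressed sketch of the two ingredients mentioned above, Cohen's kappa for inter-rater agreement and a multi-class SVM over per-epoch features, is shown below. Feature extraction from the EEG is omitted and all arrays are random placeholders, not the BABA Center data.

```python
# Sketch: inter-rater agreement and a multi-class SVM for EEG background grades,
# with placeholder arrays standing in for real EEG-derived features and labels.
import numpy as np
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
grader_a = rng.integers(0, 7, size=500)  # grades 0-6 from grader A
grader_b = rng.integers(0, 7, size=500)  # grades 0-6 from grader B
print("kappa:", cohen_kappa_score(grader_a, grader_b))

X = rng.normal(size=(500, 40))  # placeholder per-epoch features
y = grader_a                    # consensus labels in the real setting

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))  # one-vs-one multi-class
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```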
  • Valta, Akseli Eero Juhana (2023)
    Puumala orthohantavirus (PUUV) is a single-stranded, negative-sense RNA virus carried by the bank vole (Myodes glareolus). Like other orthohantaviruses, it does not cause visible symptoms in the host species, but when transmitted to humans it can cause a mild form of hemorrhagic fever with renal syndrome (HFRS) called nephropathia epidemica (NE). PUUV is the only pathogenic orthohantavirus that is endemic to Finland, where it has a relatively high incidence of approximately 35 per 100 000 inhabitants, or 1000 to 3000 diagnosed cases annually. Here we describe a miniaturized immunofluorescence assay (mini-IFA) for measuring the antibody response against PUUV from bank vole whole blood and heart samples as well as from patient serum samples. The method outline was based on the work done by Pietiäinen et al. (2022), but it was adapted for the detection of PUUV antibodies. Transfected cells expressing the PUUV structural proteins (N, GPC, Gn and Gc) were used instead of PUUV-infected cells, which allowed all steps to be performed outside of biosafety level 3 (BSL-3) conditions. This method also enables the simultaneous measurement of the IgM, IgA and IgG antibody responses from each sample in a more efficient and higher-throughput manner than traditional immunofluorescence methods. Our results show that the method is effective for testing large numbers of samples for PUUV antibodies, and it allows quick and convenient access to high-quality images that can be used both for detecting interesting targets for future studies and for producing a visual archive of the test results.
  • Aaltonen, Topi (2024)
    Positron annihilation lifetime spectroscopy (PALS) is a method used to analyse the properties of materials, namely their composition and the kinds of defects they might contain. PALS is based on the annihilation of positrons with the electrons of a studied material. The average lifetime of a positron coming into contact with a studied material depends on the density of electrons in the surroundings of the positron, with higher electron densities naturally resulting in faster annihilations on average. Introducing positrons into a material and recording the annihilation times results in a spectrum that is, in general, a noisy sum of exponential decays. These decay components have lifetimes that depend on the different density regions present in the material, and relative intensities that depend on the fractions of each region in the material. Thus, the problem in PALS is inverting the spectrum to recover the lifetimes and intensities, a problem known more generally as exponential analysis. A convolutional neural network architecture was trained and tested on simulated PALS spectra. The aim was to test whether simulated data could be used to train a network that could predict the components of PALS spectra accurately enough to be usable on spectra gathered from real experiments. Reasons for testing the approach included making the analysis of PALS spectra more automated and decreasing user-induced bias compared to some other approaches. Additionally, the approach was designed to require few computational resources, ideally being trainable and usable on a single computer. Overall, testing showed that the approach has some potential, but the prediction performance of the network depends on the parameters of the components of the target spectra, with the likely issues being similar to those reported in previous literature. At the same time, the approach was shown to be sufficiently automatable, particularly once training has been performed. Further, while some bias is introduced in specifying the variation of the training data used, this bias is not substantial. Finally, the network can be trained without considerable computational requirements within a sensible time frame.
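    The inverse problem described above can be made concrete with a small simulation: each spectrum is a noisy sum of exponential decays, and a 1D convolutional network maps the spectrum to its lifetimes and intensities. The component counts, bin widths, network sizes, and omission of the instrument resolution function and background are illustrative assumptions, not the thesis' actual setup.

```python
# Sketch: simulated PALS spectra as noisy sums of exponential decays, plus a
# small 1D CNN that regresses lifetimes and intensities (illustrative sizes only).
import numpy as np
import torch
import torch.nn as nn

def simulate_spectrum(lifetimes, intensities, n_bins=256, bin_width=0.025, counts=1e6):
    """Return a noisy, normalised decay histogram for given components (lifetimes in ns)."""
    t = np.arange(n_bins) * bin_width
    spectrum = sum(i / tau * np.exp(-t / tau) for tau, i in zip(lifetimes, intensities))
    spectrum = spectrum / spectrum.sum() * counts
    return np.random.poisson(spectrum) / counts  # Poisson counting noise

# Example: two components with lifetimes 0.18 ns and 0.40 ns, intensities 0.7 / 0.3
x = simulate_spectrum([0.18, 0.40], [0.7, 0.3])

class PalsCNN(nn.Module):
    def __init__(self, n_out=4):  # 2 lifetimes + 2 intensities
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(16), nn.Flatten(),
            nn.Linear(32 * 16, n_out),
        )

    def forward(self, spectrum):
        return self.net(spectrum)

model = PalsCNN()
pred = model(torch.tensor(x, dtype=torch.float32).view(1, 1, -1))
print(pred.shape)  # torch.Size([1, 4]); training on many simulated spectra is omitted
```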
  • Kovanen, Veikko (2020)
    Real estate appraisal, or property valuation, requires strong expertise in order to be performed successfully, making it a costly process. However, with structured data on historical transactions, the use of machine learning (ML) enables automated, data-driven valuation which is instant, virtually costless and potentially more objective than traditional methods. Yet fully ML-based appraisal is not widely used in real business applications, as the existing solutions are not sufficiently accurate and reliable. In this study, we introduce an interpretable ML model for real estate appraisal using hierarchical linear modelling (HLM). The model is trained and tested on an empirical dataset of apartment transactions in the Helsinki area collected during the past decade. As a result, we introduce a model which has competitive predictive performance while being simultaneously explainable and reliable. The main outcome of this study is the observation that hierarchical linear modelling is a very promising approach for automated real estate appraisal. The key advantage of HLM over alternative learning algorithms is its balance of performance and simplicity: the algorithm is complex enough to avoid underfitting but simple enough to be interpretable and easy to productize. In particular, the ability of these models to output complete probability distributions quantifying the uncertainty of the estimates makes them suitable for actual business use cases where high reliability is required.
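    A minimal sketch of a hierarchical (mixed-effects) specification of the kind described above is shown below, using statsmodels with random intercepts for a geographic grouping level. The column names, grouping variable, formula, and file name are assumptions for illustration, not the model actually fitted in the thesis.

```python
# Sketch: hierarchical linear model for apartment prices, with random intercepts
# for postal code (all column names are hypothetical).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("helsinki_transactions_example.csv")  # hypothetical data file

# Log price per m2 explained by apartment attributes, with postal-code random effects
model = smf.mixedlm(
    "np.log(price_per_m2) ~ floor_area + age + rooms + has_elevator",
    data=df,
    groups=df["postal_code"],
)
result = model.fit()
print(result.summary())
```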
  • Lintunen, Milla (2023)
    Fault management in mobile networks is required for detecting, analysing, and fixing problems appearing in the mobile network. When a large problem appears in the mobile network, multiple alarms are generated from the network elements. Traditionally, the Network Operations Center (NOC) processes the reported failures, creates trouble tickets for problems, and performs a root cause analysis. However, alarms do not reveal the root cause of the failure, and the correlation of alarms is often complicated to determine. If the network operator can correlate alarms and manage clustered groups of alarms instead of separate ones, it saves costs, preserves the availability of the mobile network, and improves the quality of service. Operators may have several electricity providers, and the network topology is not correlated with the electricity topology. Additionally, network sites and other network elements are not evenly distributed across the network. Hence, we investigate the suitability of density-based clustering methods for detecting mass outages and performing alarm correlation to reduce the number of created trouble tickets. This thesis focuses on assisting the root cause analysis and detecting correlated power and transmission failures in the mobile network. We implement a Mass Outage Detection Service and develop a custom density-based algorithm. Our service performs alarm correlation and creates clusters of possible power and transmission mass outage alarms. We have filed a patent application based on the work done in this thesis. Our results show that we are able to detect mass outages in real time from the data streams. The results also show that the detected clusters reduce the number of created trouble tickets and help reduce the costs of running the network. The number of trouble tickets decreases by 4.7-9.3% for the alarms we process in the service in the tested networks. When we consider only alarms included in the mass outage groups, the reduction is over 75%. Therefore, continuing to use, test, and develop the implemented Mass Outage Detection Service is beneficial for operators and for an automated NOC.
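    A simplified sketch of density-based alarm correlation follows: alarms are clustered on scaled time and location features, and each resulting cluster would correspond to one candidate mass-outage trouble ticket. The feature set, scaling, and eps/min_samples values are invented for illustration, and plain DBSCAN stands in for the custom density-based algorithm developed in the thesis.

```python
# Sketch: density-based grouping of alarms into candidate mass-outage clusters,
# assuming hypothetical columns for alarm time and site coordinates.
import pandas as pd
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

alarms = pd.read_csv("alarms_example.csv")  # hypothetical alarm-stream snapshot

# Cluster on time (epoch seconds) and site location; scaling puts the dimensions
# on comparable footing before applying a single eps threshold.
features = StandardScaler().fit_transform(
    alarms[["timestamp_s", "site_lat", "site_lon"]]
)
labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(features)
alarms["cluster"] = labels  # -1 marks noise / isolated alarms

# One trouble ticket per detected cluster instead of one per alarm
n_clusters = alarms.loc[alarms["cluster"] >= 0, "cluster"].nunique()
print(f"{len(alarms)} alarms grouped into {n_clusters} candidate mass outages")
```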