Browsing by study line "Algoritmer"
Now showing items 1-20 of 75
-
(2023)In recent years, classical neural networks have been widely used in various applications and have achieved remarkable success. However, with the advent of quantum computing, there is a growing interest in quantum neural networks (QNNs) as a potential alternative to classical machine learning. In this thesis, we study the architectures of quantum and classical neural networks. We also investigate the performance of QNNs compared to classical neural networks from various aspects, such as vanishing gradients, trainability, and expressivity. Our experiments demonstrate that QNNs have the potential to outperform classical neural networks in specific scenarios. While more powerful QNNs exhibit improved performance compared to classical neural networks, our findings also indicate that less powerful QNNs may not always yield significant improvements. This suggests that the effectiveness of QNNs in surpassing classical approaches is contingent on factors such as network architecture, optimization techniques, and problem complexity.
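As a rough illustration of the kind of model under study, the following is a minimal parameterized quantum circuit in PennyLane; the data encoding, ansatz, and framework are illustrative assumptions, not the architecture evaluated in the thesis.

```python
# Minimal sketch of a variational quantum circuit ("QNN") in PennyLane.
# The encoding and ansatz below are illustrative assumptions.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def qnn(weights, x):
    # Angle-encode a 2-feature input onto the qubits.
    qml.RY(x[0], wires=0)
    qml.RY(x[1], wires=1)
    # One entangling variational layer.
    qml.RX(weights[0], wires=0)
    qml.RX(weights[1], wires=1)
    qml.CNOT(wires=[0, 1])
    # Read out a single expectation value as the model output.
    return qml.expval(qml.PauliZ(0))

weights = np.array([0.1, 0.2], requires_grad=True)
x = np.array([0.5, -0.3])
print(qnn(weights, x))                      # forward pass
print(qml.grad(qnn, argnum=0)(weights, x))  # parameter-shift gradient
```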
-
(2024)Indexes are data structures used for retrieving records from a database. They are used in database management systems (DBMS) to optimize queries. The abundance of available data has motivated research into indexes that provide faster query times and smaller memory usage. Advances in machine learning have enabled new variations of indexes. These indexes exploit the data distribution to achieve faster query times and smaller footprints than traditional indexes such as the B-tree and B+ tree, and are known as learned indexes. In this thesis, we study the effect that a distribution change between normal distributions has on one-dimensional learned indexes and compare it to the B+ tree. We conduct an experiment in which we simulate the distribution change and measure insertion and query times. The experiment includes three learned indexes: ALEX, PGM, and LIPP.
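The shared idea behind one-dimensional learned indexes can be sketched in a few lines: a model predicts a key's position in sorted data and a bounded search corrects the prediction. This toy sketch is only that core idea, not ALEX, PGM, or LIPP themselves, which use hierarchies of models with tighter error bounds.

```python
# Toy one-dimensional learned index: a linear model predicts the rank of a
# key in a sorted array, then a bounded binary search corrects it.
import bisect
import numpy as np

keys = np.sort(np.random.normal(loc=0.0, scale=1.0, size=10_000))
positions = np.arange(len(keys))

# Fit position ~ a * key + b by least squares over (key, rank) pairs.
a, b = np.polyfit(keys, positions, deg=1)
# Worst-case prediction error over the data gives a correct search window.
err = int(np.max(np.abs(a * keys + b - positions))) + 1

def lookup(q):
    # Predicted rank, clamped to the array bounds.
    p = int(np.clip(a * q + b, 0, len(keys) - 1))
    # Search only inside the error window around the prediction.
    lo, hi = max(0, p - err), min(len(keys), p + err)
    return lo + bisect.bisect_left(keys[lo:hi], q)  # insertion point of q

print(lookup(keys[1234]))  # -> 1234
```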
-
(2022)Many real-world problem settings give rise to NP-hard combinatorial optimization problems. This results in a need for non-trivial algorithmic approaches for finding optimal solutions to such problems. Many such approaches—ranging from probabilistic and meta-heuristic algorithms to declarative programming—have been presented for optimization problems with a single objective. Less work has been done on approaches for optimization problems with multiple objectives. We present BiOptSat, an exact declarative approach for finding so-called Pareto-optimal solutions to bi-objective optimization problems. A bi-objective optimization problem arises, for example, when learning interpretable classifiers, where both the size and the classification error of the classifier should be taken into account as objectives. Using propositional logic as a declarative programming language, we seek to extend the progress and success in maximum satisfiability (MaxSAT) solving to two objectives. BiOptSat can be viewed as an instantiation of the lexicographic method and makes use of a single SAT solver that is preserved throughout the entire search procedure. It allows for solving three tasks for bi-objective optimization: finding a single Pareto-optimal solution, finding one representative solution for each Pareto point, and enumerating all Pareto-optimal solutions. We provide an open-source implementation of five variants of BiOptSat, building on different algorithms proposed for MaxSAT. Additionally, we empirically evaluate these five variants, comparing their runtime performance to that of three key competing algorithmic approaches. The empirical comparison in the contexts of learning interpretable decision rules and bi-objective set covering shows practical benefits of our approach. Furthermore, for the best-performing variant of BiOptSat, we study the effects of proposed refinements to determine their effectiveness.
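The lexicographic method underlying BiOptSat can be illustrated with a self-contained sketch in which brute-force search stands in for the incremental SAT solver, and a toy covering problem stands in for the propositional encoding; both are assumptions for illustration only.

```python
# Enumerate all Pareto-optimal points of a bi-objective problem with the
# lexicographic method: repeatedly find the lexicographically optimal
# (o1, o2) solution, then demand a strictly better second objective.
from itertools import product

def lex_solve(feasible, o1, o2, n, o2_bound):
    """Lexicographically minimal (o1, o2) among feasible assignments
    with o2 < o2_bound, or None if no such assignment exists."""
    best = None
    for x in product([0, 1], repeat=n):
        if feasible(x) and o2(x) < o2_bound:
            key = (o1(x), o2(x))
            if best is None or key < best[0]:
                best = (key, x)
    return best

def pareto_front(feasible, o1, o2, n):
    front, bound = [], float("inf")
    while True:
        res = lex_solve(feasible, o1, o2, n, bound)
        if res is None:
            return front
        (v1, v2), x = res
        front.append((v1, v2, x))
        bound = v2  # next round must strictly improve the second objective

# Toy bi-objective set covering: minimize (items used, sets left uncovered).
sets = [{0, 1}, {1, 2}, {0, 2}]
o1 = lambda x: sum(x)
o2 = lambda x: sum(1 for s in sets if not any(x[i] for i in s))
print(pareto_front(lambda x: True, o1, o2, n=3))
# -> Pareto points (0, 3), (1, 1) and (2, 0) with witnessing assignments
```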
-
(2019)This thesis presents a wavelet-based method for detecting moments of fast change in the textual contents of historical newspapers. The method works by generating time series of the relative frequencies of different words in the newspaper contents over time and calculating their wavelet transforms. The wavelet transform is essentially a family of transformations describing the changes happening in the original time series at different time scales, and it can therefore be used to pinpoint moments of fast change in the data. The produced wavelet transforms are then used to detect fast changes in word frequencies by examining products of multiple scales of the transform. The properties of the wavelet transform and the related multi-scale product are evaluated in relation to detecting various kinds of steps and spikes in different noise environments. The suitability of the method for analysing historical newspaper archives is examined using an example corpus consisting of 487 issues of Uusi Suometar from 1869–1918 and 250 issues of Wiipuri from 1893–1918. Two problematic features of the newspaper data, noise caused by OCR (optical character recognition) errors and the uneven temporal distribution of the data, are identified and their effects on the results of the presented method are evaluated using synthetic data. Finally, the method is tested on the example corpus and the results are examined briefly. The method is found to be adversely affected especially by the uneven temporal distribution of the newspaper data. Without additional processing or improvements to the quality of the examined data, a significant number of the detected steps are due to noise in the data. Various ways of alleviating this effect are proposed, alongside other suggested improvements to the system.
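The multi-scale product idea can be sketched with PyWavelets; the wavelet family, level count, and synthetic word-frequency series below are illustrative assumptions rather than the thesis's exact configuration.

```python
# Change detection via the multi-scale product of wavelet detail
# coefficients from a stationary (undecimated) wavelet transform.
import numpy as np
import pywt

# Synthetic word-frequency series with a step change at t = 512.
t = np.arange(1024)
signal = 0.1 + 0.05 * (t >= 512) + np.random.normal(0, 0.01, size=t.size)

# swt needs a length divisible by 2**level; 1024 works for level=3.
coeffs = pywt.swt(signal, "haar", level=3)

# Multiply detail coefficients across scales: genuine edges persist over
# scales and reinforce each other, while noise tends to cancel out.
product = np.prod([cD for _cA, cD in coeffs], axis=0)

print("strongest change near index:", int(np.argmax(np.abs(product))))
```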
-
(2021)Quantum computing has enormous potential in machine learning, where problems can quickly scale to be intractable for classical computation. A Boltzmann machine is a well-known energy-based graphical model suitable for various machine learning tasks. Plenty of work has already been conducted on realizing Boltzmann machines in quantum computing, with somewhat differing characteristics across approaches. In this thesis, we conduct a survey of the state of the art in quantum Boltzmann machines and their training approaches. Primarily, we examine the variational quantum Boltzmann machine, a specific variant suitable for near-term quantum hardware. Moreover, as the variational quantum Boltzmann machine relies heavily on variational quantum imaginary time evolution, we analyze variational quantum imaginary time evolution in depth. Compared to previous work, we evaluate the execution of variational quantum imaginary time evolution with a more comprehensive collection of hyperparameters. Furthermore, we train variational quantum Boltzmann machines on the toy problem of bars and stripes, which represents a more multimodal probability distribution than the Bell states and the Greenberger-Horne-Zeilinger states considered in earlier studies.
-
(2023)Power consumption is a topical and important domain nowadays. Efficient energy consumption allows environmental resources to be used wisely and, moreover, enables switching to alternative energy sources where possible. This thesis is aimed at analyzing elevator power consumption data, averaged over 5-minute and 1-hour intervals. The data has been gathered over several years, so it forms a time series. The thesis includes a review of time series models, which can then be used for the subsequent analysis. The main directions are forecasting power consumption and capturing trends and anomalies. In addition, the time series data may also be used to calculate the average power consumption of each elevator in the elevator group. As an outcome, the spread of power consumption across the four elevators in the group can be observed; one of the thesis's goals is to check whether this spread is even.
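A minimal sketch of the per-elevator averaging and evenness check, assuming hypothetical column names and toy data in place of the real consumption series:

```python
# Average power per elevator and a simple check of how evenly consumption
# is spread across the group. Column names and values are assumptions.
import pandas as pd

df = pd.DataFrame({
    "timestamp": pd.date_range("2023-01-01", periods=8, freq="5min").repeat(4),
    "elevator": list("ABCD") * 8,
    "power_kw": [1.2, 1.1, 0.9, 1.0] * 8,
})

# Mean consumption per elevator over the whole period.
means = df.groupby("elevator")["power_kw"].mean()
print(means)

# One simple "evenness" measure: coefficient of variation across elevators.
cv = means.std() / means.mean()
print(f"spread across elevators (CV): {cv:.2%}")
```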
-
(2021)Musical pattern discovery refers to the automated discovery of important repeated patterns, such as melodies and themes, from music data. Several algorithms have been developed to solve this problem, but evaluating the algorithms has been difficult without proper visualisations of their output. To address this issue, a web application named Mupadie was built. Mupadie accepts MIDI music files as input and visualises the outputs of musical pattern discovery algorithms, with implementations of SIATEC and TTWIA built into the application. Other algorithms can be visualised if the algorithm output is uploaded to Mupadie as a JSON file that follows a specified data structure. Using Mupadie, an evaluation of SIATEC and TTWIA was conducted. Mupadie was found to be a useful tool in the qualitative evaluation of these musical pattern discovery algorithms; it helped reveal systematically recurring issues with the discovered patterns, some previously known and some previously undocumented. The findings were then used to suggest improvements to the algorithms.
-
(2022)Story generation is an artificial intelligence task in which a computer program is used to create literature or stories. Such a task usually involves giving an initial scene, characters, background information, and goals, and then letting the computer program automatically generate a storyline and complete the narrative. Transformers are widely used and achieve state-of-the-art results on many natural language processing tasks, including story generation. With the help of the attention mechanism, transformers can mitigate overfitting and achieve strong results. The Generative Pre-trained Transformer (GPT) series is among the most capable transformer models and has attracted many researchers. In this thesis, transformer models are used to design and implement a machine learning method for the generation of very short stories. By introducing a commonsense knowledge base and a rule generator based on it, the model can learn relationships within the context and generate coherent narratives. Given the first sentence of the story as input, the model can complete the story. The model is based on GPT-2 and COINS. The dataset used is a collection of short stories. By comparing the generated results of different models across many aspects, we demonstrate the effectiveness of the model. In addition, the compared results are analyzed to identify potential optimization methods.
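For illustration, completing a story opening with plain GPT-2 through Hugging Face transformers might look as follows; the COINS component and the commonsense rule generator described above are not reproduced here, and the sampling settings are assumptions.

```python
# Complete a story from its first sentence with off-the-shelf GPT-2.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

first_sentence = "The old lighthouse keeper found a letter under his door."
inputs = tokenizer(first_sentence, return_tensors="pt")

# Sample a short continuation; sampling settings are illustrative.
outputs = model.generate(
    **inputs,
    max_length=80,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```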
-
(2024)Audio signals offer invaluable insights into system operational conditions and potential malfunctions. Proactive fault detection in machinery and other infrastructures through audio monitoring provides significant advantages in numerous sectors, such as industrial maintenance, healthcare, and urban security. Localizing anomalies within the spectral content of audio data opens possibilities to not only diagnose but also effectively address the underlying issues. This thesis addresses the challenge of comprehensively capturing the full context of anomalies detected within audio data. To achieve this, we have developed a novel unsupervised method that adapts visual anomaly localization techniques specifically for the analysis of audio data. This approach utilizes visual representations of audio signals, particularly spectrograms, to apply the Student-Teacher Feature Pyramid Matching Method (STFPM) within an unsupervised learning framework. By harnessing the inherent visual patterns in audio data, our method enables precise localization of anomalies. By augmenting the MIMII dataset with synthetic anomalies and conducting extensive testing, we validated our approach’s ability to localize anomalies in audio data. The findings confirm that our model not only detects but also precisely pinpoints the location of these artificially introduced anomalies within audio spectrograms in terms of both time and frequency. This demonstrates the precision and reliability of our approach, highlighting its potential as a promising solution for accurately localizing anomalies in various audio applications.
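The audio-to-image step might look like the following sketch using librosa; the transform parameters are illustrative assumptions, not the thesis's exact pipeline, and librosa's bundled demo clip stands in for a MIMII recording.

```python
# Turn a waveform into a log-magnitude spectrogram that a visual
# anomaly-localization model such as STFPM can consume.
import numpy as np
import librosa

# Load a mono clip (librosa ships a demo file; MIMII recordings are WAVs).
y, sr = librosa.load(librosa.ex("trumpet"), sr=16000)

# Short-time Fourier transform -> magnitude -> decibel scale.
stft = librosa.stft(y, n_fft=1024, hop_length=512)
spec_db = librosa.amplitude_to_db(np.abs(stft), ref=np.max)

# Normalize to [0, 1] so the 2-D array can be treated like an image whose
# axes are frequency (rows) and time (columns); an anomaly map over this
# array localizes anomalies in both dimensions at once.
img = (spec_db - spec_db.min()) / (spec_db.max() - spec_db.min())
print(img.shape)  # (freq_bins, time_frames)
```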
-
(2021)Computing the permanent of a matrix is a famous #P-hard problem with a wide range of applications. The fastest known exact algorithms for the problem require an exponential number of operations, and all known fully polynomial randomized approximation schemes are rather complicated to implement and have impractical time complexities. The most promising recent advances in approximating the permanent are based on rejection sampling and upper bounds for the permanent. In this thesis, we improve the current state of the art by developing the deep rejection sampling method, which combines an exact algorithm with the rejection sampling method. The algorithm precomputes a dynamic programming table that tightens the initial upper bound used by the rejection sampling method. In a sense, the table is used to jump-start the sampling process. We give a high probability upper bound for the time complexity of the deep rejection sampling method for random (0, 1)-matrices in which each entry is 1 with probability p. For matrices with p < 1/5, our high probability bound is stronger than in previous work. In addition, we empirically observe that our algorithm outperforms earlier rejection sampling methods by testing it with different parameters against other algorithms on multiple classes of matrices. The improvements in sampling times are especially notable in cases in which the ratios of the permanental upper bounds and the exact value of the permanent are huge.
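For reference, the exponential-time exact baseline that such approximation schemes aim to beat can be written compactly with Ryser's formula; this is a standard reference implementation, not the deep rejection sampling method itself.

```python
# Exact permanent via Ryser's inclusion-exclusion formula, O(2^n * n^2).
from itertools import combinations

def permanent(A):
    n = len(A)
    total = 0.0
    # Sum over all non-empty subsets S of columns.
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            prod = 1.0
            for i in range(n):
                prod *= sum(A[i][j] for j in S)
            total += (-1) ** r * prod
    return (-1) ** n * total

A = [[1, 0, 1],
     [1, 1, 0],
     [0, 1, 1]]
print(permanent(A))  # 2.0 for this (0,1)-matrix
```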
-
(2023)The multi-armed bandit is a sequential decision-making problem in which an agent chooses actions and receives rewards. The agent faces an explore-exploit dilemma: it has to balance exploring its options to find the optimal actions against exploiting the empirically best actions. This problem can also be solved by multiple agents who collaborate in a federated learning setting, where agents do not share their raw data samples. Instead, small updates containing learned parameters are shared. In this setting, the learning process can happen with a central server that coordinates the agents to learn the global model, or in a fully decentralized fashion where agents communicate with each other to collaborate. The distribution of rewards may be heterogeneous, meaning that the agents face distributions with local biases. Depending on the context, this can be handled by cancelling the biases through averaging, or by personalizing the global model to fit each individual agent's local biases. Another common characteristic of federated multi-armed bandits is preserving privacy. Even though only parameter updates are shared, they can be used to infer the original data. To privatize the data, a method known as differential privacy is applied by adding enough random noise to mask the effect of a single contribution. The newest area of interest for federated multi-armed bandits is security. Collaboration between multiple agents means more opportunities for attacks. Achieving robust security means defending against Byzantine attacks that inject arbitrary data into the learning process to degrade model accuracy. This thesis is a literature review that explores how the federated multi-armed bandit problem is solved, and how privacy and security for it are achieved.
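A minimal sketch of one server-coordinated round with differentially private updates follows; the reward model, bias range, and privacy parameter are assumptions chosen for illustration.

```python
# One round of a server-coordinated federated bandit: each agent shares
# only Laplace-noised per-arm mean rewards, and the server averages them.
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_arms, pulls = 5, 3, 200
true_means = np.array([0.2, 0.5, 0.8])

def local_estimates(agent_bias, eps=1.0):
    # Each agent pulls every arm `pulls` times; Bernoulli rewards with a
    # local bias stand in for heterogeneity. Laplace noise is calibrated
    # to the sensitivity (1/pulls) of a mean of bounded rewards.
    rewards = rng.uniform(0, 1, (n_arms, pulls)) < (true_means + agent_bias)[:, None]
    means = rewards.mean(axis=1)
    noise = rng.laplace(scale=(1.0 / pulls) / eps, size=n_arms)
    return means + noise

# Server: average the privatized local estimates; symmetric local biases
# cancel in expectation.
biases = rng.uniform(-0.05, 0.05, n_agents)
global_est = np.mean([local_estimates(b) for b in biases], axis=0)
print("estimated arm means:", np.round(global_est, 3))
print("best arm:", int(np.argmax(global_est)))
```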
-
(2020)Automatic readability assessment is considered a challenging task in NLP due to its high degree of subjectivity. The majority of prior work in assessing readability has focused on identifying the level of education necessary for comprehension, without considering text quality, i.e., how naturally the text flows from the perspective of a native speaker. Therefore, in this thesis, we aim to use language models, trained on well-written prose, to measure not only text readability in terms of comprehension but also text quality. We developed two word-level metrics based on the concordance of article text with predictions made using language models to assess text readability and quality. We evaluate both metrics on a set of corpora used for readability assessment or automated essay scoring (AES) by measuring the correlation between scores assigned by our metrics and human raters. According to the experimental results, our metrics are strongly correlated with text quality, achieving correlations of 0.4-0.6 on 7 out of 9 datasets. We demonstrate that GPT-2 surpasses other language models, including a bigram model, an LSTM, and a bidirectional LSTM, on the task of estimating text quality in a zero-shot setting, and that a GPT-2 perplexity-based measure is a reasonable indicator for text quality evaluation.
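The perplexity-based signal can be sketched with off-the-shelf GPT-2; the example texts and model size are illustrative, and the thesis's word-level metrics are more fine-grained than this document-level score.

```python
# Score a text by the loss GPT-2 assigns to it in a zero-shot setting.
# Lower perplexity ~ text that flows more like well-written prose.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, the model returns the mean token-level
        # cross-entropy of predicting each token from its left context.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

print(perplexity("The committee approved the proposal after a short debate."))
print(perplexity("Committee the approved proposal short a after debate the."))
```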
-
(2023)Context: Over the past years, the development of machine learning (ML) enabled software has risen in popularity. Alongside this trend, new challenges have been identified, such as growing concerns, including ethical concerns, about the use of ML models, as misuse can lead to severe consequences for human beings. To alleviate this problem, more comprehensive model documentation has been suggested, but how can that documentation be made part of a modern, continuous development process? Objective: We design and develop a solution, consisting of a software artefact and its surrounding process, that enables and moderates continuous documentation of ML models. The solution needs to comply with the modern way of working of software development. Method: We apply the design science research methodology to divide the design and development into six separate tasks, i.e., problem identification, objective definition, design and development, demonstration, evaluation, and communication. Results: The solution uses model cards for storing model details. These model cards are tested automatically and manually, forming a quality gate and ensuring the integrity of the documentation. The software artefact is implemented in the form of a GitHub Action. Conclusion: We conclude that the software artefact supports and assures proper model documentation in the form of a model card. The artefact allows for customization by the user, thereby supporting domain-specific use cases.
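The quality-gate idea might reduce to a script like the following, runnable from CI; the required section names are hypothetical placeholders, not the artefact's actual schema or checks.

```python
# Minimal model-card check of the kind a CI quality gate could run,
# e.g. invoked from a GitHub Action step. Section names are hypothetical.
import sys
from pathlib import Path

REQUIRED_SECTIONS = [
    "## Model Details",
    "## Intended Use",
    "## Training Data",
    "## Evaluation",
    "## Ethical Considerations",
]

def check_model_card(path):
    text = Path(path).read_text(encoding="utf-8")
    missing = [s for s in REQUIRED_SECTIONS if s not in text]
    for s in missing:
        print(f"model card is missing section: {s}")
    return not missing

if __name__ == "__main__":
    ok = check_model_card(sys.argv[1] if len(sys.argv) > 1 else "MODEL_CARD.md")
    sys.exit(0 if ok else 1)  # non-zero exit fails the CI quality gate
```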
-
(2022)Anomaly detection in images is the machine learning task of classifying inputs as normal or anomalous. Anomaly localization is the related task of segmenting input images into normal and anomalous regions. The output of an anomaly localization model is a 2D array, called an anomaly map, of pixel-level anomaly scores. For example, an anomaly localization model trained on images of non-defective industrial products should output high anomaly scores in image regions corresponding to visible defects. In unsupervised anomaly localization the model is trained solely on normal data, i.e. without labelled training observations that contain anomalies. This is often necessary as anomalous observations may be hard to obtain in sufficient quantities and labelling them is time-consuming and costly. Student-teacher feature pyramid matching (STFPM) is a recent and powerful method for unsupervised anomaly detection and localization that uses a pair of convolutional neural networks of identical architecture. In this thesis we propose two methods of augmenting STFPM to produce better segmentations. Our first method, discrepancy scaling, significantly improves the segmentation performance of STFPM by leveraging pre-calculated statistics containing information about the model’s behaviour on normal data. Our second method, student-teacher model assisted segmentation, uses a frozen STFPM model as a feature detector for a segmentation model which is then trained on data with artificially generated anomalies. Using this second method we are able to produce sharper anomaly maps for which it is easier to set a threshold value that produces good segmentations. Finally, we propose the concept of expected goodness of segmentation, a way of assessing the performance of unsupervised anomaly localization models that, in contrast to current metrics, explicitly takes into account the fact that a segmentation threshold needs to be set. Our primary method, discrepancy scaling, improves segmentation AUROC on the MVTec AD dataset over the base model by 13%, measured in the shrinkage of the residual (1.0 − AUROC). On the image-level anomaly detection task, a variant of the discrepancy scaling method improves performance by 12%.
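A rough sketch of the discrepancy-scaling idea follows, with assumed shapes and synthetic data standing in for the student-teacher discrepancy maps; the thesis defines the exact statistics and procedure.

```python
# Standardize anomaly maps using per-pixel statistics pre-calculated on
# normal data, so one global threshold is meaningful everywhere.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are student-teacher discrepancy maps (H x W) on normal data.
normal_maps = rng.gamma(shape=2.0, scale=0.1, size=(500, 64, 64))

# Pre-calculated statistics of the model's behaviour on normal data.
mu = normal_maps.mean(axis=0)
sigma = normal_maps.std(axis=0) + 1e-8

def scaled_anomaly_map(raw_map):
    # Per-pixel standardization compensates for regions whose discrepancy
    # is naturally high or variable even on normal inputs.
    return (raw_map - mu) / sigma

test_map = rng.gamma(2.0, 0.1, size=(64, 64))
test_map[20:30, 20:30] += 1.0              # synthetic anomalous region
seg = scaled_anomaly_map(test_map) > 4.0   # threshold in "sigmas"
print("anomalous pixels:", int(seg.sum()))
```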
-
(2021)Rapid growth and advancement of next generation sequencing (NGS) technologies have changed the landscape of genomic medicine. Today, clinical laboratories perform DNA sequencing on a regular basis, which is an error-prone process. Erroneous data affects downstream analysis and produces fallacious results. Therefore, external quality assessment (EQA) of laboratories working with NGS data is crucial. Validation of variations such as single nucleotide polymorphisms (SNPs) and InDels (<50 bp) is fairly accurate these days. However, detection and quality assessment of large changes such as copy number variations (CNVs) continue to be a concern. In this work, we aimed to study the feasibility of an automated CNV concordance analysis for laboratory EQA services. We benchmarked variants reported by 25 laboratories against the highly curated gold standard for the son (HG002/NA24385) of the Ashkenazi trio from the Personal Genome Project, published by the Genome in a Bottle Consortium (GIAB). We employed two methods to assess concordance of CNVs: sequence-based comparison with Truvari and an in-house exome-based comparison. For deletion calls of two whole genome sequencing (WGS) submissions, Truvari achieved precision greater than 88% and recall greater than 68%. Conversely, the in-house method's precision and recall peaked at 39% and 7.9%, respectively, for one WGS submission across both deletion and duplication calls. The results indicate that automated CNV concordance analysis of deletion calls for WGS-based callsets might be feasible with Truvari. On the other hand, results for panel-based targeted sequencing showed precision and recall rates for deletion calls ranging from 0-80% and 0-5.6%, respectively, with Truvari. This suggests that automated concordance analysis of CNVs for targeted sequencing remains a challenge. In conclusion, CNV concordance analysis depends on how the sequence data is generated.
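An interval-based concordance check of the kind an exome comparison can use might look like the sketch below; the 50% reciprocal-overlap threshold and the intervals are assumptions for illustration, and Truvari's sequence-based matching is considerably more sophisticated.

```python
# Toy CNV concordance: a call is a true positive if it has >= 50%
# reciprocal overlap with some gold-standard call.
def reciprocal_overlap(a, b):
    start, end = max(a[0], b[0]), min(a[1], b[1])
    ov = max(0, end - start)
    return min(ov / (a[1] - a[0]), ov / (b[1] - b[0]))

def concordance(calls, truth, min_ro=0.5):
    tp = sum(any(reciprocal_overlap(c, t) >= min_ro for t in truth)
             for c in calls)
    matched = sum(any(reciprocal_overlap(c, t) >= min_ro for c in calls)
                  for t in truth)
    precision = tp / len(calls) if calls else 0.0
    recall = matched / len(truth) if truth else 0.0
    return precision, recall

truth = [(1000, 5000), (20000, 26000)]   # gold-standard deletions (bp)
calls = [(1100, 4900), (30000, 31000)]   # a laboratory's deletion calls
print(concordance(calls, truth))         # -> (0.5, 0.5)
```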
-
(2021)The importance of automatic speech recognition (ASR) cannot be overstated in today's world, as ASR systems play a significant role in human-computer interaction. ASR systems have been studied deeply over time, but their maximum potential is yet to be explored for the Finnish language. Development of a traditional ASR system involves a substantial amount of hand-crafted engineering, which has made this technology quite difficult and resource-intensive to develop. However, with advancements in the field of neural networks, end-to-end ASR neural networks can be developed that automatically learn the mappings from audio to its corresponding transcript, thereby reducing the hand-crafted engineering required. End-to-end neural network ASR systems have largely been developed commercially by tech giants such as Microsoft, Google, and Amazon. However, these commercial services have limitations, such as data privacy and cost of usage. In this thesis, we explored existing studies on the development of an end-to-end neural network ASR for the Finnish language. One technique that has proven successful for developing neural network ASR systems when data is scarce is transfer learning, and this is the approach explored in this thesis. In addition, the success of this approach was evaluated. For this purpose, datasets collected from the Bank of Finland and Kaggle were used to fine-tune the Mozilla DeepSpeech model, a pretrained end-to-end neural network ASR for the English language. Fine-tuning the pretrained English ASR network for the Finnish language yielded a word error rate as low as 40% and a character error rate as low as 22%. We therefore concluded that transfer learning is a successful technique for creating an ASR model for a new language using a pretrained model in another language with little effort, data, and resources.
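The reported metrics can be computed as normalized Levenshtein distances over words and characters; a self-contained sketch, with made-up example strings:

```python
# Word error rate (WER) and character error rate (CER) as normalized
# Levenshtein edit distances against a reference transcript.
def levenshtein(ref, hyp):
    # Classic dynamic-programming edit distance over token sequences.
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (r != h))  # substitution
    return dp[-1]

def wer(ref, hyp):
    r, h = ref.split(), hyp.split()
    return levenshtein(r, h) / len(r)

def cer(ref, hyp):
    return levenshtein(list(ref), list(hyp)) / len(ref)

ref = "hyvää huomenta suomi"
hyp = "hyvä huomenta suomi"
print(f"WER = {wer(ref, hyp):.2f}, CER = {cer(ref, hyp):.2f}")
```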
-
(2022)This thesis is a review of articles focusing on software-assisted floor plan design for architecture. I group the articles into optimization, case-based design, and machine learning, based on their use of prior examples. I then look into each category and further classify articles along dimensions relevant to their overall approach. Case-based design was a popular research field in the 1990s and early 2000s, when several large research projects were conducted; since then, research has slowed down. Over the past 20 years, optimization methods for generating architectural floor plans have been researched extensively using a number of different algorithms and data models. The most popular approach is to use a stochastic optimization method such as a genetic algorithm or simulated annealing. More recently, a number of articles have investigated the possibility of using machine learning on architectural floor plans. The advent of neural networks and GAN models, in particular, has spurred a great deal of new research. Despite considerable research efforts, assisted floor plan design has not found its way into commercial applications. To aid industry adoption, more work is needed on the integration of computational design tools into existing design workflows.
-
(2022)In this thesis we investigate the feasibility of machine learning methods for estimating the type and the weight of individual food items from images taken of customers' plates at a buffet-style restaurant. The images were collected in collaboration with the University of Turku and Flavoria, a public lunch-line restaurant, where a camera was mounted above the cashier to automatically take a photo of the foods chosen by the customer when they went to pay. For each image, an existing system of scales at the restaurant provided the weights of each individual food item. We describe suitable model architectures and training setups for the weight estimation and food identification tasks and explain the models' theoretical background. Furthermore, we propose and compare two methods for utilizing a restaurant's daily menu information to improve model performance in both tasks. We show that the models perform well in comparison to baseline methods and reach accuracy on par with other similar work. Additionally, as the images were captured automatically, in some of the images the food was occluded or blurry, or the image contained sensitive customer information. To address this, we present computer vision techniques for preprocessing and filtering the images. We publish the dataset containing the preprocessed images along with the corresponding individual food weights for use in future research. The main results of the project have been published as a peer-reviewed article at the International Conference on Pattern Recognition Systems 2022. The article received the best paper award of the conference.
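One plausible reading of menu conditioning, restricting the classifier's output to the items on the day's menu, can be sketched as follows; the label set, menu, and masking mechanism are illustrative assumptions, not necessarily either of the two methods compared in the thesis.

```python
# Mask food-classifier logits to the day's menu before the softmax, so
# probability mass goes only to items that could actually be on the plate.
import numpy as np

labels = ["meatballs", "oven salmon", "pea soup", "mashed potatoes", "salad"]
menu_today = {"oven salmon", "mashed potatoes", "salad"}

def menu_masked_probs(logits):
    masked = np.where([l in menu_today for l in labels], logits, -np.inf)
    e = np.exp(masked - masked.max())  # softmax over the allowed items only
    return e / e.sum()

logits = np.array([2.1, 1.9, 0.3, 1.2, 0.5])  # raw classifier scores
probs = menu_masked_probs(logits)
print(dict(zip(labels, np.round(probs, 3))))  # off-menu items get 0
```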
-
(2020)Computing an edit distance between strings is one of the central problems in both string processing and bioinformatics. Optimal solutions to edit distance take time quadratic in the lengths of the input strings. The goal of this thesis is to study a new approach to approximating edit distance. We use a chaining algorithm presented by Mäkinen and Sahlin in "Chaining with overlaps revisited" (CPM 2020), implemented verbatim. Building on the chaining algorithm, our focus is on efficiently finding a good set of anchors for it. We present three approaches to computing the anchors as maximal exact matches: the Bi-Directional Burrows-Wheeler Transform, minimizers, and, lastly, a hybrid implementation of the two. Using the maximal exact matches as anchors, we can efficiently compute an optimal chaining alignment for the strings. The chaining alignment further allows us to determine all intervals where mismatches occur by looking at which sequences are not in the chain. Using these smaller intervals lets us approximate edit distance with a high degree of accuracy and a significant speed improvement. The methods described present a way to approximate edit distance in time complexity bounded by the number of maximal exact matches.
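One of the anchor sources named above, minimizers, can be sketched compactly: for every window of w consecutive k-mers, keep the smallest k-mer, and use k-mers shared between the two strings as candidate anchor seeds. The parameters and lexicographic tie-breaking below are illustrative assumptions.

```python
# (w, k)-minimizers: one small, deterministic representative per window.
def minimizers(s, k=4, w=5):
    kmers = [s[i:i + k] for i in range(len(s) - k + 1)]
    picked = set()
    for start in range(len(kmers) - w + 1):
        window = kmers[start:start + w]
        j = min(range(w), key=lambda i: window[i])  # lexicographic order
        picked.add((start + j, window[j]))          # (position, k-mer)
    return picked

a = "ACGTACGTGACCTGA"
b = "ACGTACGTGTCCTGA"
shared = {m for _, m in minimizers(a)} & {m for _, m in minimizers(b)}
print(sorted(shared))  # k-mers that could seed anchors for chaining
```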