
Browsing by study line "Språkteknologi"


  • Leal, Rafael (2020)
    In modern Natural Language Processing, document categorisation tasks can achieve success rates of over 95% using fine-tuned neural network models. However, so-called "zero-shot" situations, where specific training data is not available, are researched much less frequently. The objective of this thesis is to investigate how pre-trained Finnish language models fare when classifying documents in a completely unsupervised way: by relying only on their general "knowledge of the world" obtained during training, without using any additional data. Two datasets are created expressly for this study, since labelled and openly available datasets in Finnish are very uncommon: one is built from around 5k news articles from Yle, the Finnish Broadcasting Company, and the other from 100 pieces of Finnish legislation obtained from the Semantic Finlex data service. Several language representation models are built, based on the vector space model, by combining modular elements: different kinds of textual representations for documents and category labels, different algorithms that transform these representations into vectors (TF-IDF, Annif, fastText, LASER, FinBERT, S-BERT), and different similarity measures and post-processing techniques (such as SVD and ensemble models). This approach allows a variety of models to be tested (a minimal sketch of the underlying embedding-similarity idea appears after this listing). The combination of Annif for extracting keywords and fastText for producing word embeddings from them achieves F1 scores of 0.64 on the Finlex dataset and 0.73-0.74 on the Yle datasets. Model ensembles are able to raise these figures by up to three percentage points. SVD can bring these numbers to 0.7 and 0.74-0.75 respectively, but these gains are not necessarily reproducible on unseen data. These results are distant from those obtained with state-of-the-art supervised models, but the method is flexible, can be deployed quickly and, most importantly, does not depend on labelled data, which can be slow and expensive to produce. A reliable way to set the input parameter for SVD would be an important next step for the work done in this thesis.
  • De Bluts, Thomas (2021)
    Graph databases are an emerging technology enticing more and more software architects every day. The possibilities they offer for representing data are unmatched by other databases. They have proven their efficiency in certain domains, such as social network architecture, where relational data can be structured in a way that reflects reality better than what relational databases can provide. Their use in linguistics has, however, been very limited, nearly nonexistent, despite the many cases where linguists could make great use of a graph. This thesis aims to demonstrate some of the use cases where graph databases could help computational linguistics. To that end, it focuses on practical experiments in which a graph database (in this case, Neo4j) is used to test its capabilities for serving linguistic data (a minimal sketch of storing and querying a dependency tree in Neo4j follows after this listing). The aim is to provide a general starting point for further research on the topic. Two experiments are conducted, one with a continuous flow of relational textual data and one with static corpus data based on the Universal Dependencies treebanks. Queries are then performed against the database and retrieval performance is evaluated. The user-friendliness of the tools is also taken into account in the evaluation.
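
As a rough illustration of the embedding-similarity idea described in the first abstract (Leal, 2020), the sketch below classifies a document by comparing its fastText vector with the vectors of the category labels and picking the most similar one. This is not the thesis's actual pipeline, which combines Annif keyword extraction, several embedding models, SVD and ensembles; the model file name, category labels and example sentence here are assumptions chosen for illustration only.

```python
# Minimal zero-shot classification sketch, assuming the pre-trained Finnish
# fastText vectors (cc.fi.300.bin) and the `fasttext` Python package.
# Labels and the test document are illustrative, not from the thesis data.
import numpy as np
import fasttext

model = fasttext.load_model("cc.fi.300.bin")  # pre-trained Finnish word vectors

def embed(text: str) -> np.ndarray:
    """Average word embeddings into a single unit-length text vector."""
    vec = model.get_sentence_vector(text.replace("\n", " "))
    return vec / (np.linalg.norm(vec) + 1e-9)

labels = ["urheilu", "politiikka", "talous"]           # hypothetical category names
label_vecs = np.stack([embed(label) for label in labels])

def classify(document: str) -> str:
    """Assign the label whose vector is most similar (cosine) to the document."""
    doc_vec = embed(document)
    scores = label_vecs @ doc_vec                      # cosine similarity (unit-norm vectors)
    return labels[int(np.argmax(scores))]

print(classify("Jääkiekon MM-kisat alkavat ensi viikolla."))
```

In the thesis itself, the document and label representations, the embedding algorithm and the similarity measure are all swappable modules; the sketch fixes one simple choice for each.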
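The second abstract (De Bluts, 2021) describes serving treebank data from Neo4j. The sketch below loads a single toy dependency parse as Token nodes linked by DEPENDS_ON relationships and then retrieves subject-verb pairs, using the official `neo4j` Python driver. The connection details, node and relationship names, and the example sentence are assumptions for illustration, not the thesis's actual schema or data.

```python
# Minimal sketch: store one dependency-parsed sentence in Neo4j and query it.
# Assumes a local Neo4j instance reachable at bolt://localhost:7687.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# (id, form, upos, head, deprel) tuples for a toy parse of "Kissa istuu matolla."
tokens = [
    (1, "Kissa", "NOUN", 2, "nsubj"),
    (2, "istuu", "VERB", 0, "root"),
    (3, "matolla", "NOUN", 2, "obl"),
]

with driver.session() as session:
    # One node per token, keyed by sentence id and token id.
    for tid, form, upos, head, deprel in tokens:
        session.run(
            "MERGE (t:Token {sent: $sent, id: $id}) "
            "SET t.form = $form, t.upos = $upos",
            sent="s1", id=tid, form=form, upos=upos,
        )
    # One edge per dependency, pointing from dependent to head.
    for tid, form, upos, head, deprel in tokens:
        if head != 0:
            session.run(
                "MATCH (d:Token {sent: $sent, id: $dep}), "
                "      (h:Token {sent: $sent, id: $head}) "
                "MERGE (d)-[:DEPENDS_ON {deprel: $rel}]->(h)",
                sent="s1", dep=tid, head=head, rel=deprel,
            )
    # Example retrieval: all nominal subjects and the verbs they depend on.
    result = session.run(
        "MATCH (d:Token)-[:DEPENDS_ON {deprel: 'nsubj'}]->(h:Token) "
        "RETURN d.form AS subject, h.form AS verb"
    )
    for record in result:
        print(record["subject"], "<-nsubj-", record["verb"])

driver.close()
```

The same pattern scales to a full Universal Dependencies treebank by iterating over its sentences; graph traversal queries such as the nsubj lookup above are the kind of retrieval the thesis evaluates.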