
Browsing by Author "Saastamoinen, Taneli"


  • Saastamoinen, Taneli (2020)
    Word2vec is a method for constructing so-called word embeddings, or word vectors, from natural text. Word embeddings are a compressed representation of word contexts, based on the original text. Such representations have many uses in natural language processing, as they contain a lot of contextual information for each word in a relatively compact and easily usable format. They can be used either for directly examining and comparing the contexts of words, or as more informative representations of the original words themselves in various downstream tasks.

    In this thesis, I investigate the theoretical underpinnings of word2vec, how it works in practice, how it can be used and its results evaluated, and how it can be applied to examine changes in word contexts over time. I also list some other applications of word2vec and word embeddings, and briefly touch on some related and newer algorithms that are used for similar tasks. The word2vec algorithm, while mathematically fairly straightforward, involves several optimisations and engineering tricks that trade theoretical accuracy for practical performance. These are described in detail and their impacts are considered. The end result is that word2vec is a very efficient algorithm whose results are nevertheless robust enough to be widely usable.

    I describe the practicalities of training and evaluating word2vec models using the freely available, open-source gensim library for the Python programming language. I train numerous models with different hyperparameter settings and perform various evaluations on the results to gauge the goodness of fit of the word2vec model. The source material for these models comes from two corpora of Finnish news articles, from STT (years 1992-2018) and Yle (years 2011-2018). The practicalities of processing Finnish-language text with word2vec are considered as well. Finally, I use word2vec to investigate changes in word contexts over time.
    This is done by considering word2vec models trained on the Yle and STT corpora one year at a time, so that the context of a given word can be compared between two different years. The main word I consider is "tekoäly" (Finnish for "artificial intelligence"); some related words are examined as well. The result is a comparison of the nearest neighbours of "tekoäly" and related words in various years across the two corpora. From this it can be seen that the context of these words has changed noticeably during the period considered. If the meaning of a word is taken to be inseparable from its context, we can conclude that the word "tekoäly" has meant something different in different years. Word2vec, as a quantitative method, provides a measurable way to gauge such semantic change over time; this change can also be visualised, as I have done.

    Word2vec is a stochastic method, and as such its convergence properties deserve attention. As I note, the convergence of word2vec is by now well established, both through theoretical examination and through numerous successful practical applications. Although this is not usually done, I repeat my analysis in order to examine the stability and convergence of word2vec in this particular case, concluding that my results are robust.