
Browsing by Author "Nevalainen, Janne"


  • Nevalainen, Janne (2020)
    Modern neural-network-based language models can reach state-of-the-art performance on a wide range of natural language tasks. Their success rests on their ability to learn from large unlabeled corpora through pretraining: transfer learning is used to learn strong representations of the language, which are then transferred to new domains and tasks. I look at how language models enable transfer learning for NLP, especially from the viewpoint of classification, and ask how transfer learning can be formally defined. I compare different LM implementations in theory and also use two example data sets to empirically test their performance on very small labeled training data.
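The transfer-learning setup the abstract describes — pretrain representations on large unlabeled data, then train a classifier on very few labeled examples — can be sketched in a toy form. Everything below is a hypothetical illustration, not the thesis code: count-based word vectors stand in for language-model pretraining, and a nearest-centroid rule stands in for the downstream classifier.

```python
# Toy sketch of transfer learning for text classification (illustrative only):
# "pretrain" word vectors on unlabeled text, then reuse ("transfer") them for
# a classifier trained on a tiny labeled set.

def pretrain_embeddings(corpus, window=1):
    """Count-based word vectors: each word is represented by the counts of
    its neighbouring words (a crude stand-in for LM pretraining)."""
    vocab = sorted({w for sent in corpus for w in sent})
    index = {w: i for i, w in enumerate(vocab)}
    vecs = {w: [0.0] * len(vocab) for w in vocab}
    for sent in corpus:
        for i, w in enumerate(sent):
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j != i:
                    vecs[w][index[sent[j]]] += 1.0
    return vecs, index

def sentence_vector(sent, vecs, dim):
    """Average the pretrained vectors of the words in a sentence."""
    total = [0.0] * dim
    known = [w for w in sent if w in vecs]
    for w in known:
        total = [t + v for t, v in zip(total, vecs[w])]
    n = max(len(known), 1)
    return [t / n for t in total]

def nearest_centroid_classify(train, label_names, vecs, dim, sent):
    """Tiny classifier on top of the transferred features: assign the
    sentence to the class whose centroid is closest in embedding space."""
    centroids = {}
    for label in label_names:
        members = [sentence_vector(s, vecs, dim) for s, l in train if l == label]
        centroids[label] = [sum(col) / len(members) for col in zip(*members)]
    x = sentence_vector(sent, vecs, dim)
    return min(centroids,
               key=lambda l: sum((a - b) ** 2 for a, b in zip(x, centroids[l])))

# "Pretraining" corpus: unlabeled, relatively large compared to the labeled set.
unlabeled = [["good", "great", "movie"], ["bad", "awful", "movie"],
             ["great", "good", "film"], ["awful", "bad", "film"]]
vecs, index = pretrain_embeddings(unlabeled)

# Very small labeled training data, as in the abstract's experimental setting.
train = [(["good", "movie"], "pos"), (["bad", "movie"], "neg")]
```

The point of the sketch is the division of labour: the representation is learned once from unlabeled data, and the labeled examples are only needed for the final, much simpler classification step.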