
Dealing with a small amount of data: developing Finnish sentiment analysis

Publication year

2022

Authors

Toivanen, Ida; Lindroos, Jari; Räsänen, Venla; Taipale, Sakari

Abstract

Sentiment analysis has become increasingly prominent among natural language processing tasks. It entails the extraction of opinions, emotions, and sentiments. In this paper, we aim to develop and test language models for the low-resource language Finnish. We use the term "low-resource" for a language lacking resources for language modeling, especially annotated data. We investigate four models: the state-of-the-art FinBERT [1] and the competitive alternative BERT models Finnish ConvBERT [2], Finnish Electra [3], and Finnish RoBERTa [4]. This comparative framework of multiple BERT variants is tied to the additional methods we implement to counteract the lack of annotated data. Basing our sentiment analysis on partly annotated survey data collected from eldercare workers, we supplement the training data with further sources. Besides the non-annotated portion of the survey data, we use additional data (an external in-domain dataset and an open-source news corpus) to examine how the training data can be extended with methods such as pretraining (masked language modeling) and pseudo-labeling. Pretraining and pseudo-labeling, often classed as semi-supervised learning methods, make it possible to exploit unlabeled data either by initializing the model or by assigning plausible labels to unlabeled samples before the actual model is trained. Our results suggest that, of the single BERT models, FinBERT performs best for our use case. Moreover, ensemble learning that combines multiple models further improves performance and predictive power, outperforming a single FinBERT model. Both pseudo-labeling and ensemble learning proved valuable for extending training data for low-resource languages such as Finnish. However, with pseudo-labeling, proper regularization methods should be considered to prevent confirmation bias from degrading model performance.
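The pseudo-labeling and majority-vote ensemble steps described in the abstract can be sketched as follows. This is a minimal pure-Python illustration with a toy nearest-centroid classifier standing in for the paper's BERT models; the function names, the softmax-style confidence, and the 0.8 threshold are all illustrative assumptions, not the authors' implementation.

```python
# Minimal pseudo-labeling + majority-vote ensemble sketch (assumed,
# toy setup; not the paper's BERT pipeline).
import math
from collections import Counter

def train_centroids(X, y):
    """Compute the mean feature vector (centroid) of each class."""
    sums, counts = {}, {}
    for x, label in zip(X, y):
        acc = sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {c: [v / counts[c] for v in s] for c, s in sums.items()}

def predict_with_confidence(centroids, x):
    """Predict the nearest class; confidence is a softmax over
    negative Euclidean distances to the class centroids."""
    dists = {c: sum((a - b) ** 2 for a, b in zip(cen, x)) ** 0.5
             for c, cen in centroids.items()}
    exps = {c: math.exp(-d) for c, d in dists.items()}
    best = max(exps, key=exps.get)
    return best, exps[best] / sum(exps.values())

def pseudo_label(X_lab, y_lab, X_unlab, threshold=0.8):
    """One pseudo-labeling round: train on labelled data, then adopt
    only unlabelled samples predicted above the confidence threshold.
    A strict threshold is one simple guard against confirmation bias."""
    centroids = train_centroids(X_lab, y_lab)
    kept_X, kept_y = [], []
    for x in X_unlab:
        label, conf = predict_with_confidence(centroids, x)
        if conf >= threshold:
            kept_X.append(x)
            kept_y.append(label)
    return X_lab + kept_X, y_lab + kept_y

def majority_vote(per_model_predictions):
    """Ensemble several models' label lists by per-sample majority vote."""
    return [Counter(col).most_common(1)[0][0]
            for col in zip(*per_model_predictions)]
```

For example, with labelled clusters near (0, 0) ("neg") and (5, 5) ("pos"), an unlabelled point close to the positive cluster is adopted with high confidence, while an ambiguous midpoint falls below the threshold and is discarded rather than risk reinforcing a wrong guess.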

Organisations and authors

Jyväskylän yliopisto

Toivanen Ida

Lindroos Jari

Taipale Sakari

Räsänen Venla

Publication type

Publication format

Article

Parent publication type

Conference

Article type

Other article

Audience

Scientific

Peer-reviewed

Peer-reviewed

Publication type classification of the Ministry of Education and Culture (OKM)

A4 Article in conference proceedings

Open access

Open access at publisher's service

No

Self-archived

Yes

Other information

Fields of science

Computer and information sciences; Linguistics

Keywords


Country of publication

United States (USA)

Publisher's internationality

International

Language

English

International co-publication

No

Co-publication with a company

No

DOI

10.1109/besc57393.2022.9995536

The publication is included in the Ministry of Education and Culture's data collection

Yes