Data efficient deep learning models for text classification

Garreta, Raúl

Supervisor(s): Moncecchi, Guillermo - Wonsever, Dina

Abstract:

Text classification is one of the most important techniques in natural language processing. Applications range from topic detection and intent identification to sentiment analysis. Text classification is usually formulated as a supervised learning problem, in which a labeled training set is fed to a machine learning algorithm. In practice, training supervised machine learning algorithms such as those used in deep learning requires large training sets, which in turn involves a considerable amount of human labor to manually tag the data. This constitutes a bottleneck in applied supervised learning, so supervised learning models that require smaller amounts of tagged data are desirable. In this work, we research and compare supervised learning models for text classification that are data efficient, that is, that require small amounts of tagged data to achieve state-of-the-art performance levels. In particular, we study transfer learning techniques that reuse previous knowledge to train supervised learning models. For the purpose of comparison, we focus on opinion polarity classification, a subproblem of sentiment analysis that assigns a polarity (positive or negative) to an opinion depending on the mood of the opinion holder. Multiple deep learning models for learning text representations, including BERT, InferSent, the Universal Sentence Encoder, and the Sentiment Neuron, are compared on six datasets from different domains. Results show that transfer learning dramatically improves data efficiency, obtaining double-digit improvements in F1 score with just under 100 supervised training examples.
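The transfer learning setup the abstract describes (a pretrained text encoder reused as a feature extractor, with a small classifier trained on top from limited labeled data) can be sketched as follows. This is a minimal illustration under stated assumptions, not the thesis's actual pipeline: it assumes the publicly available Universal Sentence Encoder module on TensorFlow Hub and a scikit-learn logistic regression head, and the tiny labeled set is invented for illustration.

import tensorflow_hub as hub
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

# Reuse previous knowledge: load a pretrained sentence encoder from TF Hub.
encoder = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

# Hypothetical small labeled set (in practice, just under 100 examples).
train_texts = ["Great movie, I loved it.", "Terrible service, never again."]
train_labels = [1, 0]  # 1 = positive polarity, 0 = negative

# Encode texts into fixed-size vectors and fit a lightweight classifier on top.
X_train = encoder(train_texts).numpy()
clf = LogisticRegression(max_iter=1000).fit(X_train, train_labels)

# Evaluate with the F1 score, the metric reported in this work.
test_texts = ["An absolute delight.", "A complete waste of time."]
X_test = encoder(test_texts).numpy()
print(f1_score([1, 0], clf.predict(X_test)))

In this design, only the small classification head is trained on the target domain; the encoder's parameters, learned on large unlabeled or general-purpose corpora, are what make the approach data efficient.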


Bibliographic Details
2020
Text classification
Natural language processing
Sentiment analysis
Deep learning
Transfer learning
English
Universidad de la República
COLIBRI
https://hdl.handle.net/20.500.12008/34000
Open access
Creative Commons Attribution - NonCommercial - NoDerivatives License (CC BY-NC-ND 4.0)