Back to the Roots
Predicting the Source Domain of Metaphors using Contrastive Learning
- Authors
- Meghdut Sengupta, Milad Alshomary, Henning Wachsmuth
- Abstract
Metaphors frame a given target domain using concepts from another, usually more concrete, source domain. Previous research in NLP has focused on the identification of metaphors and the interpretation of their meaning. In contrast, this paper studies to what extent the source domain can be predicted computationally from a metaphorical text. Given a dataset with metaphorical texts from a finite set of source domains, we propose a contrastive learning approach that ranks source domains by their likelihood of being referred to in a metaphorical text. In experiments, it achieves reasonable performance even for rare source domains, clearly outperforming a classification baseline.
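The abstract describes ranking candidate source domains by how likely they are to be referred to in a metaphorical text, trained with a contrastive objective. The sketch below is a minimal, hypothetical illustration of that idea and not the authors' implementation: a toy shared encoder (a stand-in for a pretrained language model) embeds both the metaphorical text and each candidate source-domain label into one space, an InfoNCE-style loss pulls each text towards its true domain and away from the others, and inference ranks all domains by cosine similarity. The encoder, domain names, and hyperparameters are assumptions for illustration only.

```python
# Hypothetical sketch of contrastive source-domain ranking (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BagOfEmbeddings(nn.Module):
    """Toy stand-in for a pretrained sentence encoder."""
    def __init__(self, vocab_size: int, dim: int = 64):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, dim, mode="mean")
        self.proj = nn.Linear(dim, dim)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # L2-normalized embeddings so dot products are cosine similarities.
        return F.normalize(self.proj(self.emb(token_ids)), dim=-1)

def info_nce_loss(text_vecs, domain_vecs, domain_ids, temperature=0.07):
    """Contrastive loss: each text should be most similar to its own source domain."""
    logits = text_vecs @ domain_vecs.T / temperature   # (batch, n_domains)
    return F.cross_entropy(logits, domain_ids)

def rank_domains(text_vec, domain_vecs, domain_names):
    """Rank all candidate source domains by cosine similarity to the text."""
    scores = (text_vec @ domain_vecs.T).squeeze(0)
    order = scores.argsort(descending=True)
    return [(domain_names[i], scores[i].item()) for i in order]

if __name__ == "__main__":
    vocab_size, n_domains = 1000, 4
    domain_names = ["war", "journey", "building", "plants"]  # illustrative domains
    encoder = BagOfEmbeddings(vocab_size)

    # Hypothetical pre-tokenized batch: texts plus the index of each text's true domain.
    texts = torch.randint(0, vocab_size, (8, 12))
    labels = torch.randint(0, n_domains, (8,))
    domain_tokens = torch.randint(0, vocab_size, (n_domains, 3))

    optim = torch.optim.Adam(encoder.parameters(), lr=1e-3)
    for _ in range(5):  # a few toy training steps
        loss = info_nce_loss(encoder(texts), encoder(domain_tokens), labels)
        optim.zero_grad()
        loss.backward()
        optim.step()

    # Inference: rank all source domains for one metaphorical text.
    with torch.no_grad():
        print(rank_domains(encoder(texts[:1]), encoder(domain_tokens), domain_names))
```

In the paper's setting, a pretrained language model would presumably replace the toy encoder; per the abstract, ranking the full set of domains rather than predicting a single class is what lets the approach score reasonably even for rare source domains.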
- Organisation(s)
- Natural Language Processing Section, Institute of Artificial Intelligence
- Type
- Conference contribution
- Pages
- 137-142
- No. of pages
- 6
- Publication date
- 2022
- Publication status
- Published
- Peer reviewed
- Yes
- ASJC Scopus subject areas
- Language and Linguistics, Artificial Intelligence, Computer Science Applications, Linguistics and Language
- Sustainable Development Goals
- SDG 4 - Quality Education