Publication Details

Modern Talking in Key Point Analysis: Key Point Matching using Pretrained Encoders

authored by
Jan Heinrich Reimer, Thi Kim Hanh Luu, Max Henze, Yamen Ajjour
Abstract

We contribute to the ArgMining 2021 shared task on Quantitative Summarization and Key Point Analysis with two approaches to argument key point matching. Given an argument and a short key point that share a topic and a stance towards that topic, the task is to decide whether the key point matches the argument's content. We approach this task in two ways: first, we develop a simple rule-based baseline matcher that computes token overlap after removing stop words, stemming, and adding synonyms and antonyms. Second, we fine-tune pretrained BERT and RoBERTa language models as regression classifiers for only a single epoch. We manually examine errors of our proposed matcher models and find that longer arguments are harder to classify. Our fine-tuned RoBERTa-Base model achieves a mean average precision of 0.913, the best score on strict labels among all participating teams.
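The rule-based baseline described in the abstract could be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the stop-word list, the crude suffix stemmer, the omission of the synonym/antonym expansion step, and the 0.4 decision threshold are all assumptions made for the sake of a self-contained example.

```python
# Hypothetical sketch of a token-overlap key point matcher, in the spirit of
# the rule-based baseline from the abstract. Stop words, the toy stemmer,
# and the threshold below are illustrative assumptions only.

STOP_WORDS = {"a", "an", "the", "is", "are", "of", "to", "in", "and", "that"}


def stem(token: str) -> str:
    """Very rough suffix stripping (a stand-in for a real stemmer)."""
    for suffix in ("ing", "ed", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token


def tokens(text: str) -> set:
    """Lowercase, strip punctuation, drop stop words, and stem."""
    words = (w.strip(".,!?;:").lower() for w in text.split())
    return {stem(w) for w in words if w and w not in STOP_WORDS}


def overlap_score(key_point: str, argument: str) -> float:
    """Fraction of the key point's tokens that also occur in the argument."""
    kp, arg = tokens(key_point), tokens(argument)
    return len(kp & arg) / len(kp) if kp else 0.0


def matches(key_point: str, argument: str, threshold: float = 0.4) -> bool:
    return overlap_score(key_point, argument) >= threshold
```

For example, `overlap_score("School uniforms reduce bullying", "Uniforms in schools are reducing bullying among students")` yields 0.75, above the assumed threshold, so the pair would be labeled a match.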

External Organisation(s)
Martin Luther University Halle-Wittenberg
Type
Conference contribution
Pages
175-183
No. of pages
9
Publication date
01.11.2021
Publication status
Published
Peer reviewed
Yes
Electronic version(s)
https://doi.org/10.18653/v1/2021.argmining-1.18 (Access: Open)