Publication Details

Objective Argument Summarization in Search

authored by
Timon Ziegenbein, Shahbaz Syed, Martin Potthast, Henning Wachsmuth
Abstract

Decision-making and opinion formation are influenced by arguments from various online sources, including social media, web publishers, and, not least, the search engines used to retrieve them. However, many, if not most, arguments on the web are informal, especially in online discussions or on personal pages. They can be long and unstructured, subjective and emotional, and contain inappropriate language. This makes it difficult to find relevant arguments efficiently. We hypothesize that, on search engine results pages, “objective snippets” of arguments are better suited than the commonly used extractive snippets, and we develop corresponding methods for two important tasks: snippet generation and neutralization. For each of these tasks, we investigate two approaches based on (1) prompt engineering for large language models (LLMs) and (2) supervised models trained on existing datasets. We find that a supervised summarization model outperforms zero-shot summarization with LLMs for snippet generation. For neutralization, using reinforcement learning to align an LLM with human preferences for suitable arguments leads to the best results. Both tasks are complementary, and their combination leads to the best argument snippets according to automatic and human evaluation.

Organisation(s)
Natural Language Processing Section
External Organisation(s)
Leipzig University
Center for Scalable Data Analytics and Artificial Intelligence Dresden/Leipzig (ScaDS.AI)
University of Kassel
Type
Conference contribution
Pages
335-351
No. of pages
17
Publication date
17.07.2024
Publication status
Published
Peer reviewed
Yes
ASJC Scopus subject areas
Theoretical Computer Science, General Computer Science
Electronic version(s)
https://doi.org/10.1007/978-3-031-63536-6_20 (Access: Open)