Computational Explanation

Educational applications of natural language processing (NLP) and other AI techniques include computational writing support that takes learner-specific characteristics into account. Explainability expresses the desire to make a system's behavior intelligible, and thus controllable, by humans. The NLP Group investigates these topics in increasing depth, with a focus on the analysis and generation of natural language explanations. Our work currently centers on the following topics.

Computational Writing Support

In our research project ArgSchool, we study how to support students in learning to write through computational methods that provide developmental feedback. These methods assess and explain which aspects of a text are good, which need improvement, and how to improve them, adapted to the student's learning stage.

Early work from our group in this context includes the computational prediction of essay quality dimensions (COLING 2016). More recently, we have studied cultural differences in learner texts (ArgMining 2022).
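To illustrate the general idea, the following is a minimal sketch of predicting one essay quality dimension as a text classification task. The feature set, labels, and example data are hypothetical placeholders, not the setup of the COLING 2016 paper.

```python
# Minimal sketch: predicting an essay quality dimension (here, "organization")
# as text classification. Data and labels are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: essays paired with a discrete quality score (1-3).
essays = [
    "First, recycling saves resources. Second, it reduces waste. Thus, ...",
    "I think school uniforms are bad because because they are bad.",
]
organization_scores = [3, 1]

# TF-IDF n-grams stand in for the richer feature sets typically used for
# essay scoring (discourse markers, structural features, etc.).
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(essays, organization_scores)

print(model.predict(["Recycling is important. For example, ..."]))
```

A real system would complement such a score with an explanation of which text aspects drove it, so the feedback remains developmental rather than purely evaluative.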

Dialogical Explanation Models

In a project within TRR 318 "Constructing Explainability", we study dialogical explanation quality in terms of how explainers achieve an intended form of understanding on the explainee's side. In particular, we investigate which characteristics successful explanation processes share in general and which are specific to a given context. A first result is an annotated corpus of explanatory dialogues (COLING 2022).
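As a rough illustration of how such dialogue data might be represented, here is a hypothetical schema for one annotated dialogue turn; the field names and label sets are illustrative assumptions, not the actual corpus format.

```python
# Hypothetical schema for one turn of an annotated explanatory dialogue.
# Field names and label sets are illustrative, not the actual corpus format.
from dataclasses import dataclass

@dataclass
class DialogueTurn:
    speaker: str       # "explainer" or "explainee"
    text: str          # utterance text
    dialogue_act: str  # e.g., "provide_explanation", "check_understanding"
    topic: str         # the concept being explained

dialogue = [
    DialogueTurn("explainer", "A blockchain is like a shared ledger.",
                 "provide_explanation", "blockchain"),
    DialogueTurn("explainee", "So everyone keeps a copy of it?",
                 "check_understanding", "blockchain"),
]
```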

In a second project within TRR 318, we model the metaphorical space established by different metaphors for one and the same concept. We study how metaphors foster understanding through highlighting and hiding, and we aim to establish knowledge about when and how metaphors are used and adapted in explanatory dialogues. As part of this, we used contrastive learning to identify conceptual metaphors in texts (FigLang 2022).
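For intuition, the following is a minimal sketch of contrastive representation learning for metaphor identification, using a generic triplet-loss setup; the encoder, loss, and examples are illustrative assumptions, not the FigLang 2022 method.

```python
# Minimal sketch of contrastive learning for metaphor identification:
# pull texts realizing the same conceptual metaphor together in embedding
# space, push texts realizing different metaphors apart.
# Encoder, loss, and data are illustrative, not the FigLang 2022 method.
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    """Toy encoder: embeds token ids and mean-pools them into one vector."""
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)

    def forward(self, token_ids):
        return self.embed(token_ids).mean(dim=1)

encoder = TextEncoder()
loss_fn = nn.TripletMarginLoss(margin=1.0)

# Hypothetical token-id tensors for three texts: anchor and positive share
# a conceptual metaphor (e.g., ARGUMENT IS WAR), the negative does not.
anchor   = torch.randint(0, 1000, (1, 12))
positive = torch.randint(0, 1000, (1, 12))
negative = torch.randint(0, 1000, (1, 12))

loss = loss_fn(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()  # gradients shape the embedding space contrastively
```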

Explanation Generation

We study the generation of user-adapted explanations, among others in the context of a project within the CRC 901 "On-the-Fly Computing". Groundwork for explanations that adapt to the user's language includes style transfer (INLG 2018) and controlled neural reframing (EMNLP Findings 2021).
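To give a flavor of controlled generation, here is a minimal sketch in which a control prefix prepended to the input steers a sequence-to-sequence model toward a target wording; the model choice, the prefix, and the prompt are illustrative assumptions, not the INLG 2018 or EMNLP Findings 2021 setups.

```python
# Minimal sketch of controlled generation via a control prefix: a marker
# prepended to the input signals the desired output style or frame.
# Model and prefix are illustrative; a real system would fine-tune the
# model on pairs (prefix + source text, rewritten text) so the prefix
# acquires meaning.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

prefix = "<simple> "  # hypothetical marker requesting learner-friendly wording
text = "The algorithm exhibits quadratic asymptotic time complexity."

inputs = tokenizer(prefix + text, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```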