
Invited Talk by Mayukh Das on Tuesday, May 13, 2025, at 16:00

Portrait of guest speaker Mayukh Das from the Technical University of Braunschweig, standing in front of a white background, wearing glasses and a blue striped shirt.

On Tuesday, May 13, 2025, Mayukh Das will give an invited talk on bias constraints in large language models, covering challenges in generation, evaluation, and reasoning.

We are pleased to invite you to our upcoming invited talk by Mayukh Das from the Technical University of Braunschweig. The talk will take place on Tuesday, May 13, 2025, at 16:00, in Room 1101.F138 (Welfengarten 1).

# Speaker

Mayukh Das

Technical University of Braunschweig

https://www.tu-braunschweig.de/ifis/staff/mayukh-das


# Time and location

Tuesday, May 13, 2025, 16:00

1101.F138 (Welfengarten 1) 


# Title

Bias Constraints in LLMs: Challenges in Generation, Evaluation, and Reasoning


# Abstract

Large Language Models (LLMs) show impressive abilities across many natural language tasks, but their performance is fundamentally limited by different kinds of bias. In this talk, I'll explore three main challenges related to bias in LLMs. First, I'll explain how biases introduced during the decoding process can lead LLMs to generate repetitive, stereotypical, or unfaithful outputs, and why it's difficult to fix this without hurting fluency and coherence. Second, I'll show that our current methods for measuring bias are often too narrow, focusing only on individual sentences while missing broader patterns that emerge across longer texts. This makes fairness evaluations less reliable. Third, I'll discuss how LLMs often default to typical or obvious answers in complex reasoning tasks, particularly when non-stereotypical or counterfactual thinking is needed. Together, these studies reveal that bias isn't just a surface issue but a deep structural challenge for current LLM architectures and evaluation methods.
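As a minimal sketch of the first challenge, the snippet below contrasts two decoding strategies. It assumes the Hugging Face `transformers` library and the public `gpt2` checkpoint; the example is illustrative and not drawn from the speaker's work.

```python
# Illustrative only: contrasts greedy decoding (prone to repetitive
# loops) with penalized sampling (less repetitive, but the penalty
# itself is a decoding-time bias that can hurt fluency).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The nurse told the doctor that"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding: deterministic, frequently collapses into loops.
greedy = model.generate(**inputs, max_new_tokens=40, do_sample=False)

# Sampling with a repetition penalty: suppresses loops, but the
# penalty distorts the model's output distribution in its own way.
sampled = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
    repetition_penalty=1.3,
)

print(tokenizer.decode(greedy[0], skip_special_tokens=True))
print(tokenizer.decode(sampled[0], skip_special_tokens=True))
```

Comparing the two outputs on prompts like the one above gives a quick feel for the trade-off the abstract describes: every decoding strategy injects its own bias, and suppressing one artifact tends to introduce another.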