Publication details

Moments Matter: Stabilizing Policy Optimization using Return Distributions

Authored by
Dennis Jabs, Aditya Mohan, Marius Lindauer
Abstract

Deep Reinforcement Learning (RL) agents often learn policies that achieve the same episodic return yet behave very differently, due to a combination of environmental factors (random transitions, initial conditions, reward noise) and algorithmic factors (minibatch selection, exploration noise). In continuous control tasks, even small parameter shifts can produce unstable gaits, complicating both algorithm comparison and real-world transfer. Previous work has shown that such instability arises when policy updates traverse noisy neighborhoods, and that the negative tail of the post-update return distribution R(θ) (obtained by repeatedly sampling minibatches, updating θ, and measuring final returns) is a useful indicator of this noise. Although explicitly constraining the policy to maintain a narrow R(θ) can improve stability, directly estimating R(θ) is computationally expensive in high-dimensional settings. We propose an alternative that takes advantage of environmental stochasticity to mitigate update-induced variability. Specifically, we model the state-action return distribution via a distributional critic and then bias the advantage function of PPO using higher-order moments (skewness and kurtosis) of this distribution. By penalizing extreme tail behavior, our method discourages policies from entering parameter regimes prone to instability. We hypothesize that in environments where post-update critic values align poorly with post-update returns, standard PPO struggles to produce a narrow R(θ). In such cases, our moment-based correction significantly narrows R(θ), improving stability by up to 75% on Walker2D while preserving comparable evaluation returns.
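The moment-based advantage correction described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `moment_biased_advantage`, the penalty weights `lam_skew` and `lam_kurt`, and the specific penalty form (absolute skewness plus positive excess kurtosis) are assumptions; the abstract only states that higher-order moments of the critic's return distribution are used to bias PPO's advantage so that extreme tail behavior is penalized. Here the distributional critic is assumed to emit per-state quantile estimates.

```python
import numpy as np

def moment_biased_advantage(advantages, quantiles, lam_skew=0.1, lam_kurt=0.1):
    """Bias PPO advantages with higher-order moments of a distributional
    critic's return distribution (illustrative parameterization).

    advantages : (N,) standard advantage estimates (e.g. from GAE)
    quantiles  : (N, K) quantile estimates of the state-action return
    """
    mean = quantiles.mean(axis=1, keepdims=True)
    std = quantiles.std(axis=1, keepdims=True) + 1e-8  # avoid divide-by-zero
    z = (quantiles - mean) / std
    skew = (z ** 3).mean(axis=1)        # third standardized moment
    kurt = (z ** 4).mean(axis=1) - 3.0  # excess kurtosis (heavy tails > 0)
    # Penalize asymmetric and heavy-tailed return distributions, discouraging
    # the policy from parameter regimes with extreme tail behavior.
    return advantages - lam_skew * np.abs(skew) - lam_kurt * np.maximum(kurt, 0.0)
```

Under this sketch, states whose return distribution is symmetric with light tails keep their advantage unchanged, while skewed or heavy-tailed ones are down-weighted before the PPO policy update.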

Organizational unit(s)
Fachgebiet Maschinelles Lernen
Institut für Künstliche Intelligenz
Type
Abstract in conference proceedings
Publication date
15.02.2025
Publication status
Accepted/In press
Peer-reviewed
Yes