Publication Details

Structure in Deep Reinforcement Learning: A Survey and Open Problems

Authored by
Aditya Mohan, Amy Zhang, Marius Lindauer
Abstract

Reinforcement Learning (RL), bolstered by the expressive capabilities of Deep Neural Networks (DNNs) for function approximation, has demonstrated considerable success in numerous applications. However, its practicality in addressing a wide range of real-world scenarios, characterized by diverse and unpredictable dynamics, noisy signals, and large state and action spaces, remains limited. This limitation stems from poor data efficiency, limited generalization capabilities, a lack of safety guarantees, and the absence of interpretability, among other factors. To overcome these challenges and improve performance across these crucial metrics, one promising avenue is to incorporate additional structural information about the problem into the RL learning process. Various sub-fields of RL have proposed methods for incorporating such inductive biases. We amalgamate these diverse methodologies under a unified framework, shed light on the role of structure in the learning problem, and classify these methods into distinct patterns of incorporating structure. By leveraging this comprehensive framework, we provide valuable insights into the challenges associated with structured RL and lay the groundwork for a design-pattern perspective on RL research. This perspective paves the way for future advancements and aids in the development of more effective and efficient RL algorithms that can better handle real-world scenarios.

Organisation(s)
Machine Learning Section
Institute of Artificial Intelligence
External Organisation(s)
Meta AI
University of Texas at Austin
Type
Article
Journal
Journal of Artificial Intelligence Research
ISSN
1076-9757
Publication date
20.01.2024
Publication status
Accepted/In press
Peer reviewed
Yes
Electronic version(s)
https://arxiv.org/abs/2306.16021 (Access: Open)