
Learning from Demonstrations for Real World Reinforcement Learning

Abstract · Apr 12, 2017 12:44

control atari simulator hester agent dqfd policies dqn cs-ai cs-lg

Arxiv Abstract

  • Todd Hester
  • Matej Vecerik
  • Olivier Pietquin
  • Marc Lanctot
  • Tom Schaul
  • Bilal Piot
  • Andrew Sendonaris
  • Gabriel Dulac-Arnold
  • Ian Osband
  • John Agapiou
  • Joel Z. Leibo
  • Audrunas Gruslys

Deep reinforcement learning (RL) has achieved several high-profile successes in difficult control problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance. In fact, their performance during learning can be extremely poor. This may be acceptable in a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting where the agent may access data from previous control of the system. We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages this data to massively accelerate the learning process even from relatively small amounts of demonstration data. DQfD works by combining temporal difference updates with large-margin classification of the demonstrator's actions. We show that DQfD has better initial performance than Deep Q-Networks (DQN) on 40 of 42 Atari games and achieves a higher average reward than DQN on 27 of 42 Atari games. We also demonstrate that DQfD learns faster than DQN even when given poor demonstration data.
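
The loss combination in that sentence is the core idea. As a rough illustration, here is a minimal NumPy sketch of how a one-step TD error and a large-margin classification term might be combined on a single demonstration transition. The names and values (`lambda_margin`, `margin`, `dqfd_style_loss`) are illustrative assumptions, not the paper's API, and the full algorithm adds further components (e.g., n-step returns and prioritized replay) that this sketch omits.

```python
import numpy as np

def large_margin_loss(q_values, expert_action, margin=0.8):
    """Large-margin classification term:
    max_a [Q(s, a) + l(a_E, a)] - Q(s, a_E),
    where l(a_E, a) equals `margin` when a != a_E and 0 otherwise.
    It is positive until Q(s, a_E) leads every other action by `margin`."""
    margins = np.full_like(q_values, margin)
    margins[expert_action] = 0.0  # no penalty for matching the expert
    return np.max(q_values + margins) - q_values[expert_action]

def dqfd_style_loss(q_values, next_q_values, action, reward, expert_action,
                    gamma=0.99, lambda_margin=1.0):
    """Combine a one-step TD error with the large-margin term on a
    demonstration transition (a sketch, not the paper's exact loss)."""
    td_target = reward + gamma * np.max(next_q_values)
    td_loss = (td_target - q_values[action]) ** 2
    return td_loss + lambda_margin * large_margin_loss(q_values, expert_action)
```

Intuitively, the margin term anchors the agent's early behavior to the demonstrator's actions while the TD term keeps learning from reward, which is what lets DQfD start far ahead of a from-scratch DQN.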

Read the paper (pdf) »