Exploring Deep Recurrent Q-Learning for Navigation in a 3D Environment

Research output: Contribution to journal › Journal article › Research › peer-review


Abstract

Learning to navigate in 3D environments from raw sensory input is an important step towards bridging the gap between human players and artificial intelligence in digital games. Recent advances in deep reinforcement learning have seen success in teaching agents to play Atari 2600 games from raw pixel information where the environment is always fully observable by the agent. This is not true for first-person 3D navigation tasks. Instead, the agent is limited by its field of view which limits its ability to make optimal decisions in the environment. This paper explores using a Deep Recurrent Q-Network implementation with a long short-term memory layer for dealing with such tasks by allowing an agent to process recent frames and gain a memory of the environment. An agent was trained in a 3D first-person labyrinth-like environment for 2 million frames. Informal observations indicate that the trained agent navigated in the right direction but was unable to find the target of the environment.
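
The article itself is not reproduced on this page; as a rough illustration of the approach described in the abstract, the sketch below shows what a Deep Recurrent Q-Network with a long short-term memory layer typically looks like: a convolutional encoder over raw frames followed by an LSTM whose hidden state acts as the agent's memory across recently seen frames. All layer sizes, the frame resolution, and the number of actions are illustrative assumptions, not values taken from the paper.

# Minimal DRQN sketch (PyTorch): convolutional front end + LSTM layer,
# so Q-value estimates can depend on a short history of observations
# in a partially observable 3D environment.
import torch
import torch.nn as nn

class DRQN(nn.Module):
    def __init__(self, num_actions: int = 4, hidden_size: int = 256):
        super().__init__()
        # Encoder over single 84x84 grayscale frames (assumed resolution).
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        conv_out = 64 * 7 * 7  # flattened feature size for 84x84 input
        # The LSTM integrates per-frame features over time, giving the
        # agent a memory of parts of the environment it has already seen.
        self.lstm = nn.LSTM(conv_out, hidden_size, batch_first=True)
        self.q_head = nn.Linear(hidden_size, num_actions)

    def forward(self, frames, hidden=None):
        # frames: (batch, time, 1, 84, 84)
        b, t = frames.shape[:2]
        x = self.conv(frames.view(b * t, *frames.shape[2:]))
        x = x.view(b, t, -1)
        x, hidden = self.lstm(x, hidden)
        # One Q-value per action at every timestep in the sequence.
        return self.q_head(x), hidden

if __name__ == "__main__":
    net = DRQN()
    obs = torch.zeros(1, 8, 1, 84, 84)   # a short sequence of frames
    q_values, state = net(obs)
    print(q_values.shape)                 # torch.Size([1, 8, 4])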
Original language: English
Article number: e3
Book series: EAI Endorsed Transactions on Creative Technologies
Volume: 18
Issue number: 14
Number of pages: 5
ISSN: 2409-9708
DOI: 10.4108/eai.16-1-2018.153641
Publication status: Published - 2018

Keywords

  • Reinforcement Learning
  • Deep Learning
  • Q-Learning
  • Deep Recurrent Q-Learning
  • Artificial Intelligence
  • Navigation
  • Game Intelligence

Cite this

@article{b4f013caa94446acb0ab881eeb91038d,
title = "Exploring Deep Recurrent Q-Learning for Navigation in a 3D Environment",
abstract = "Learning to navigate in 3D environments from raw sensory input is an important step towards bridging the gap between human players and artificial intelligence in digital games. Recent advances in deep reinforcement learning have seen success in teaching agents to play Atari 2600 games from raw pixel information where the environment is always fully observable by the agent. This is not true for first-person 3D navigation tasks. Instead, the agent is limited by its field of view which limits its ability to make optimal decisions in the environment. This paper explores using a Deep Recurrent Q-Network implementation with a long short-term memory layer for dealing with such tasks by allowing an agent to process recent frames and gain a memory of the environment. An agent was trained in a 3D first-person labyrinth-like environment for 2 million frames. Informal observations indicate that the trained agent navigated in the right direction but was unable to find the target of the environment.",
keywords = "Reinforcement Learning, Deep Learning, Q-Learning, Deep Recurrent Q-Learning, Artificial Intelligence, Navigation, Game Intelligence",
author = "Rasmus Brejl and Hendrik Purwins and Henrik Schoenau-Fog",
year = "2018",
doi = "10.4108/eai.16-1-2018.153641",
language = "English",
volume = "18",
journal = "EAI Endrosed Trasactions on Creative Technologies",
issn = "2409-9708",
publisher = "EAI - European Alliance for Innovation",
number = "14",

}

Exploring Deep Recurrent Q-Learning for Navigation in a 3D Environment. / Brejl, Rasmus; Purwins, Hendrik; Schoenau-Fog, Henrik.

In: EAI Endorsed Transactions on Creative Technologies, Vol. 18, No. 14, e3, 2018.

Research output: Contribution to journal › Journal article › Research › peer-review

TY - JOUR

T1 - Exploring Deep Recurrent Q-Learning for Navigation in a 3D Environment

AU - Brejl, Rasmus

AU - Purwins, Hendrik

AU - Schoenau-Fog, Henrik

PY - 2018

Y1 - 2018

N2 - Learning to navigate in 3D environments from raw sensory input is an important step towards bridging the gap between human players and artificial intelligence in digital games. Recent advances in deep reinforcement learning have seen success in teaching agents to play Atari 2600 games from raw pixel information where the environment is always fully observable by the agent. This is not true for first-person 3D navigation tasks. Instead, the agent is limited by its field of view which limits its ability to make optimal decisions in the environment. This paper explores using a Deep Recurrent Q-Network implementation with a long short-term memory layer for dealing with such tasks by allowing an agent to process recent frames and gain a memory of the environment. An agent was trained in a 3D first-person labyrinth-like environment for 2 million frames. Informal observations indicate that the trained agent navigated in the right direction but was unable to find the target of the environment.

AB - Learning to navigate in 3D environments from raw sensory input is an important step towards bridging the gap between human players and artificial intelligence in digital games. Recent advances in deep reinforcement learning have seen success in teaching agents to play Atari 2600 games from raw pixel information where the environment is always fully observable by the agent. This is not true for first-person 3D navigation tasks. Instead, the agent is limited by its field of view which limits its ability to make optimal decisions in the environment. This paper explores using a Deep Recurrent Q-Network implementation with a long short-term memory layer for dealing with such tasks by allowing an agent to process recent frames and gain a memory of the environment. An agent was trained in a 3D first-person labyrinth-like environment for 2 million frames. Informal observations indicate that the trained agent navigated in the right direction but was unable to find the target of the environment.

KW - Reinforcement Learning

KW - Deep Learning

KW - Q-Learning

KW - Deep Recurrent Q-Learning

KW - Artificial Intelligence

KW - Navigation

KW - Game Intelligence

U2 - 10.4108/eai.16-1-2018.153641

DO - 10.4108/eai.16-1-2018.153641

M3 - Journal article

VL - 18

JO - EAI Endorsed Transactions on Creative Technologies

JF - EAI Endorsed Transactions on Creative Technologies

SN - 2409-9708

IS - 14

M1 - e3

ER -