Learning Less Random to Learn Better in Deep Reinforcement Learning with Noisy Parameters

  • Received : 2019.07.10
  • Accepted : 2019.07.27
  • Published : 2019.07.31

Abstract

In deep Reinforcement Learning (RL), exploration is performed stochastically over the actions available in the state space, whereas exploitation relies on behaviors that have already generalized well. Balancing exploration and exploitation is essential for good results. Randomly selecting actions with an ε-greedy policy has become the de facto exploration method. An alternative is to add noisy parameters to the neural network to obtain richer exploration. However, with such stochastic exploration in a perturbed neural network it is not easy to predict or detect over-fitting, and a well-trained RL agent does not necessarily prevent or detect over-fitting in its network. We therefore propose a novel deep RL design that balances this exploration with drop-out to reduce over-fitting in perturbed neural networks.
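To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of how noisy-parameter exploration and drop-out can be combined in one Q-network, assuming PyTorch. The class names `NoisyLinear` and `NoisyDropoutQNet`, the factorised-Gaussian noise scheme, and all hyper-parameter values are illustrative assumptions, not taken from the paper.

```python
# Sketch: NoisyNet-style linear layers for exploration, with a dropout layer
# between them to curb over-fitting in the perturbed network (assumed design).
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class NoisyLinear(nn.Module):
    """Linear layer whose weights and biases are perturbed by learnable Gaussian noise."""

    def __init__(self, in_features, out_features, sigma0=0.5):
        super().__init__()
        # Learnable means and noise scales for weights and biases.
        self.weight_mu = nn.Parameter(torch.empty(out_features, in_features))
        self.weight_sigma = nn.Parameter(torch.empty(out_features, in_features))
        self.bias_mu = nn.Parameter(torch.empty(out_features))
        self.bias_sigma = nn.Parameter(torch.empty(out_features))
        # Noise buffers, resampled at every forward pass.
        self.register_buffer("eps_in", torch.zeros(in_features))
        self.register_buffer("eps_out", torch.zeros(out_features))
        bound = 1.0 / math.sqrt(in_features)
        nn.init.uniform_(self.weight_mu, -bound, bound)
        nn.init.uniform_(self.bias_mu, -bound, bound)
        nn.init.constant_(self.weight_sigma, sigma0 * bound)
        nn.init.constant_(self.bias_sigma, sigma0 * bound)

    @staticmethod
    def _f(x):
        # Factorised-noise transform f(x) = sign(x) * sqrt(|x|).
        return x.sign() * x.abs().sqrt()

    def forward(self, x):
        self.eps_in.normal_()
        self.eps_out.normal_()
        eps_w = torch.outer(self._f(self.eps_out), self._f(self.eps_in))
        weight = self.weight_mu + self.weight_sigma * eps_w
        bias = self.bias_mu + self.bias_sigma * self._f(self.eps_out)
        return F.linear(x, weight, bias)


class NoisyDropoutQNet(nn.Module):
    """Q-network that explores via noisy parameters and regularises via drop-out."""

    def __init__(self, obs_dim, n_actions, hidden=128, p_drop=0.2):
        super().__init__()
        self.fc1 = NoisyLinear(obs_dim, hidden)
        self.drop = nn.Dropout(p=p_drop)   # drop-out to reduce over-fitting
        self.fc2 = NoisyLinear(hidden, n_actions)

    def forward(self, obs):
        h = F.relu(self.fc1(obs))
        h = self.drop(h)                   # active only in training mode
        return self.fc2(h)


# Usage: actions are chosen greedily; exploration comes from the parameter noise,
# so no separate ε-greedy schedule is needed in this sketch.
q_net = NoisyDropoutQNet(obs_dim=4, n_actions=2)
obs = torch.randn(1, 4)
action = q_net(obs).argmax(dim=1).item()
```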

Keywords