Co-evolution of Shaping Rewards and Meta-Parameters in Reinforcement Learning
By: Christensen, H. I.; Elfwing, Stefan; Uchibe, Eiji; Doya, Kenji
Type: Article from Journal - e-Journal
In collection: Adaptive Behavior vol. 16 no. 6 (Dec. 2008), pages 400-412
Topics: reinforcement learning; shaping rewards; meta-parameters; genetic algorithms
Fulltext: 400.pdf (561.97 KB)
Article content
In this article, we explore an evolutionary approach to the optimization of potential-based shaping rewards and meta-parameters in reinforcement learning. Shaping rewards is a frequently used approach to increase the learning performance of reinforcement learning, with regard to both initial performance and convergence speed. Shaping rewards provide additional knowledge to the agent in the form of richer reward signals, which guide learning to high-rewarding states. Reinforcement learning depends critically on a few meta-parameters that modulate the learning updates or the exploration of the environment, such as the learning rate α, the discount factor of future rewards γ, and the temperature τ that controls the trade-off between exploration and exploitation in softmax action selection. We validate the proposed approach in simulation using the mountain-car task. We also transfer shaping rewards and meta-parameters, evolutionarily obtained in simulation, to hardware, using a robotic foraging task.
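To make the abstract's terms concrete, the sketch below (not the authors' implementation) shows where the three meta-parameters α, γ and τ and a potential-based shaping reward enter a tabular Q-learning agent with softmax action selection. The table sizes, the potential function phi, and all numeric values are illustrative assumptions; in the paper, the shaping potential and the meta-parameters are instead evolved with a genetic algorithm.

import numpy as np

# Meta-parameters that the paper co-evolves with the shaping rewards
# (values here are placeholders, not the evolved settings).
alpha = 0.1    # learning rate
gamma = 0.99   # discount factor of future rewards
tau   = 0.5    # softmax temperature: higher = more exploration

rng = np.random.default_rng(0)

def softmax_action(q_row, tau):
    """Softmax action selection: P(a) proportional to exp(Q(s,a)/tau)."""
    prefs = q_row / tau
    prefs -= prefs.max()                       # numerical stability
    probs = np.exp(prefs) / np.exp(prefs).sum()
    return rng.choice(len(q_row), p=probs)

def shaping(phi, s, s_next):
    """Potential-based shaping reward F(s, s') = gamma*phi(s') - phi(s)."""
    return gamma * phi(s_next) - phi(s)

def update(Q, s, a, r, s_next, phi):
    """Tabular Q-learning step on the shaped reward r + F(s, s')."""
    target = r + shaping(phi, s, s_next) + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])

# Illustrative use with a toy 10-state, 2-action table and a hand-made
# potential that prefers state 9 (assumed for the example only).
Q = np.zeros((10, 2))
phi = lambda s: -abs(s - 9)
a = softmax_action(Q[0], tau)
update(Q, s=0, a=a, r=0.0, s_next=1, phi=phi)

Because the shaping term has the potential-based form F(s, s') = γΦ(s') - Φ(s), it changes the reward signal the agent learns from without changing which policies are optimal, which is why the potential and the meta-parameters can be tuned (here, evolved) freely for learning speed.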