Actor–Critic Models of Reinforcement Learning in the Basal Ganglia: From Natural to Artificial Rats
By: Khamassi, Mehdi; Lacheze, Loic; Girard, Benoit; Berthoz, Alain; Guillot, Agnes
Type: Article from Journal - e-Journal
In collection: Adaptive Behavior vol. 13 no. 2 (Jun. 2005), pages 131–148
Topics: animat approach; TD learning; Actor–Critic model; S–R task; taxon navigation
Fulltext: 131.pdf (399.91 KB)
Abstract
Since 1995, numerous Actor–Critic architectures for reinforcement learning have been proposed as models of dopamine-like reinforcement learning mechanisms in the rat's basal ganglia. However, these models were usually tested on different tasks, which makes it difficult to compare their efficiency for an autonomous animat. Here we compare four such architectures in an animat performing the same reward-seeking task. The comparison illustrates the consequences of different hypotheses about how multiple Actor sub-modules and Critic units are managed, and about how autonomously their coordination is determined. We show that the classical scheme of coordinating modules by a mixture of experts, gated by each module's performance, did not allow the task to be solved. We then address the question of which principle can efficiently combine these units. Finally, improvements to Critic modeling and the accuracy of Actor–Critic models on a natural task are discussed in the perspective of our Psikharpax project, an artificial rat that has to survive autonomously in unpredictable environments.
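The abstract refers to Actor–Critic architectures in which a Critic learns state values by TD learning while an Actor adjusts action preferences using the same TD error (the dopamine-like teaching signal). As a rough illustration only, and not one of the specific architectures compared in the paper, a minimal tabular Actor–Critic update could be sketched as follows; the state/action discretization, learning rates, and function names are assumptions made for this sketch.

```python
import numpy as np

# Minimal tabular Actor-Critic sketch (illustrative only; not the
# architectures evaluated in the paper). Sizes and hyperparameters
# below are hypothetical.
n_states, n_actions = 25, 4          # assumed discretization of the animat's world
gamma, alpha_c, alpha_a = 0.95, 0.1, 0.05

V = np.zeros(n_states)               # Critic: state-value estimates
H = np.zeros((n_states, n_actions))  # Actor: action preferences

def softmax(prefs):
    z = np.exp(prefs - prefs.max())
    return z / z.sum()

def actor_critic_step(s, a, r, s_next, done):
    """One TD-learning update shared by the Critic and the Actor."""
    target = r + (0.0 if done else gamma * V[s_next])
    delta = target - V[s]            # TD error (dopamine-like signal)
    V[s] += alpha_c * delta          # Critic moves its estimate toward the target
    H[s, a] += alpha_a * delta       # Actor reinforces or weakens the taken action
    return delta

def select_action(s):
    # Stochastic action selection from the Actor's preferences
    return np.random.choice(n_actions, p=softmax(H[s]))
```

The paper's comparison concerns how several such Actor sub-modules and Critic units are combined (for instance by a mixture of experts gated on module performance), which this single-module sketch deliberately leaves out.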