New Results on Recurrent Network Training: Unifying the Algorithms and Accelerating Convergence
By: Atiya, A. F.; Parlos, A. G.
Type: Article from Journal - international scholarly
In collection: IEEE Transactions on Neural Networks vol. 11 no. 3 (2000), pp. 697-709.
Topics: training; recurrent networks; algorithms; convergence
Availability
Central Library (Semanggi)
Call number: II36.4
Non-reserve: 1 (available for loan: 0)
Reserve: none
Article content
How to train recurrent networks efficiently remains a challenging and active research topic. Most of the proposed training approaches are based on computational ways of efficiently obtaining the gradient of the error function, and they can generally be classified into five major groups. In this study we present a derivation that unifies these approaches: we demonstrate that they are merely five different ways of solving a particular matrix equation. The second goal of this paper is to develop a new algorithm based on the insights gained from this unifying formulation. The new algorithm, which is based on approximating the error gradient, has lower computational complexity per weight update than competing techniques for most typical problems, and it reaches the error minimum in far fewer iterations. A desirable characteristic of a recurrent network training algorithm is the ability to update the weights in an online fashion. We have therefore also developed an online version of the proposed algorithm, based on updating the error gradient approximation recursively.
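The algorithm described above approximates the error gradient. As background, the sketch below shows the exact-gradient baseline it competes with: backpropagation through time (BPTT), one of the classical gradient-computation approaches the paper unifies. This is a minimal Python/NumPy illustration, not the authors' algorithm; the network size, toy data, and all variable names are assumptions made for the example.

import numpy as np

# Minimal sketch: gradient-based training of a small recurrent network
# via backpropagation through time (BPTT), one of the classical exact
# gradient-computation approaches. The paper's own algorithm replaces
# this exact gradient with a cheaper approximation; everything below
# (sizes, data, names) is illustrative only.

rng = np.random.default_rng(0)
n_hidden, n_in, T = 8, 2, 20

W = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # recurrent weights
U = rng.normal(scale=0.1, size=(n_hidden, n_in))      # input weights
v = rng.normal(scale=0.1, size=n_hidden)              # readout weights

x = rng.normal(size=(T, n_in))      # toy input sequence
d = np.sin(np.arange(T) / 3.0)      # toy target sequence

lr = 0.05
for epoch in range(200):
    # Forward pass: h_t = tanh(W h_{t-1} + U x_t), y_t = v . h_t
    h = np.zeros((T + 1, n_hidden))
    y = np.zeros(T)
    for t in range(T):
        h[t + 1] = np.tanh(W @ h[t] + U @ x[t])
        y[t] = v @ h[t + 1]
    e = y - d                        # per-step output errors

    # Backward pass (BPTT): propagate dE/dh_t back through time.
    dW = np.zeros_like(W); dU = np.zeros_like(U); dv = np.zeros_like(v)
    dh_next = np.zeros(n_hidden)
    for t in reversed(range(T)):
        dv += e[t] * h[t + 1]
        dh = e[t] * v + dh_next          # error reaching h_{t+1}
        dz = dh * (1.0 - h[t + 1] ** 2)  # through the tanh nonlinearity
        dW += np.outer(dz, h[t])
        dU += np.outer(dz, x[t])
        dh_next = W.T @ dz               # pass error one step back

    W -= lr * dW / T; U -= lr * dU / T; v -= lr * dv / T

print("final MSE:", float(np.mean(e ** 2)))

The backward loop here costs on the order of T times the square of the hidden size per weight update; the paper's contribution is an approximation that reduces the cost of computing the update and, per the abstract, converges in far fewer iterations.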