Accelerating The Training of Feedforward Neural Networks Using Generalized Hebbian Rules for Initializing The Internal Representations
By:
Karayiannis, N. B.
Type:
Article from journal - international scholarly
In collection:
IEEE Transactions on Neural Networks vol. 7 no. 2 (1996), pages 419-426.
Topics:
TRAINING; accelerating; training; neural networks; Hebbian rules; internal representations
Availability
Perpustakaan Pusat (Semanggi)
Call number:
II36.1
Non-reserve:
1 (available for loan: 0)
Reserve:
none
Abstract
This paper presents an unsupervised learning scheme for initializing the internal representations of feedforward neural networks, which accelerates the convergence of supervised learning algorithms. The paper proposes that the initial set of internal representations can be formed through a bottom-up unsupervised learning process applied before the top-down supervised training algorithm. The synaptic weights that connect the inputs of the network to the hidden units can be determined through linear or nonlinear variations of a generalized Hebbian learning rule, known as Oja's rule. Various generalized Hebbian rules were experimentally tested and evaluated in terms of their effect on the convergence of the supervised training process. Several experiments indicated that the proposed initialization of the internal representations significantly improves the convergence of gradient-descent-based algorithms used to perform nontrivial training tasks. The improvement in convergence becomes more significant as the size and complexity of the training task increase.
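To make the initialization idea concrete, the following is a minimal sketch of a bottom-up Hebbian pre-training pass of the kind the abstract describes. It uses Sanger's generalized Hebbian algorithm, a multi-unit extension of Oja's rule, as one representative linear variation; the function name, hyperparameters, and training loop here are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def hebbian_init(X, n_hidden, lr=0.01, epochs=20, seed=0):
    """Sketch: initialize input-to-hidden weights with Sanger's
    generalized Hebbian algorithm (an extension of Oja's rule).

    X        : (n_samples, n_inputs) array of training inputs.
    n_hidden : number of hidden units.
    Returns W: (n_hidden, n_inputs) weight matrix whose rows
    approximate the leading principal directions of X, to be used
    as the starting point for supervised (e.g. gradient-descent)
    training instead of purely random weights.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_hidden, X.shape[1]))
    for _ in range(epochs):
        for x in X:
            y = W @ x  # linear activations of the hidden units
            # Sanger update: dW = lr * (y x^T - LT(y y^T) W),
            # where LT keeps the lower triangle; the decay term
            # (Oja-style) keeps the weight rows bounded and
            # decorrelates successive units.
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W
```

After this unsupervised pass, `W` would replace the random hidden-layer initialization before backpropagation begins; the output-layer weights can still be initialized randomly.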