Magnified Gradient Function With Deterministic Weight Modification in Adaptive Learning
By:
Ng, Sin-Chun; Cheung, Chi-Chung; Leung, Shu-Hung
Type:
Article from Journal - international scientific
In collection:
IEEE Transactions on Neural Networks vol. 15 no. 6 (Nov. 2004), pages 1411-1423.
Topics:
BEHAVIOR MODIFICATION; magnified gradient function; deterministic weight modification; adaptive learning
Availability
Perpustakaan Pusat (Semanggi)
Call number:
II36.11
Non-reserve copies: 1 (available for loan: 0)
Reserve copies: none
Abstract
This work presents two novel approaches, backpropagation (BP) with magnified gradient function (MGFPROP) and deterministic weight modification (DWM), to speed up the convergence rate and improve the global convergence capability of the standard BP learning algorithm. The purpose of MGFPROP is to increase the convergence rate by magnifying the gradient function of the activation function, while the main objective of DWM is to reduce the system error by changing the weights of a multilayered feedforward neural network in a deterministic way. Simulation results show that the performance of the above two approaches is better than BP and other modified BP algorithms for a number of learning problems. Moreover, the integration of the above two approaches, forming a new algorithm called MDPROP, can further improve the performance of MGFPROP and DWM. From our simulation results, the MDPROP algorithm always outperforms BP and other modified BP algorithms in terms of convergence rate and global convergence capability.
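To make the idea of "magnifying the gradient function of the activation function" concrete, the following is a minimal sketch of one backpropagation training step in which the standard sigmoid derivative term is replaced by a magnified version. The abstract does not give the exact magnification used in MGFPROP, so the power-law form, the parameter S, and the single-hidden-layer network here are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

# Minimal sketch of BP with a "magnified" gradient of the activation
# function. The magnification below (raising the sigmoid derivative to
# the power 1/(1+S)) is a hypothetical choice for illustration only.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def magnified_sigmoid_grad(o, S=0.5):
    # Standard sigmoid derivative is o * (1 - o). Raising it to a power
    # in (0, 1) enlarges small gradients, so learning slows down less
    # when a unit's output saturates near 0 or 1 (assumed form).
    return (o * (1.0 - o)) ** (1.0 / (1.0 + S))

def train_step(x, t, W1, W2, lr=0.1, S=0.5):
    # Forward pass through a one-hidden-layer feedforward network.
    h = sigmoid(x @ W1)          # hidden activations
    y = sigmoid(h @ W2)          # network outputs

    # Backward pass, using the magnified derivative in place of o*(1-o).
    delta_out = (y - t) * magnified_sigmoid_grad(y, S)
    delta_hid = (delta_out @ W2.T) * magnified_sigmoid_grad(h, S)

    # Plain gradient-descent weight updates.
    W2 -= lr * np.outer(h, delta_out)
    W1 -= lr * np.outer(x, delta_hid)

    # System error for this pattern (sum-of-squares).
    return 0.5 * np.sum((y - t) ** 2)
```

DWM, the second approach described in the abstract, would act separately by adjusting the weights deterministically to lower the system error; its specific update rule is not given in the abstract and is therefore not sketched here.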