Using Localizing Learning to Improve Supervised Learning Algorithms
Article from Journal - international scholarly journal
IEEE Transactions on Neural Networks vol. 12 no. 5 (2001)
Slow learning of neural-network function approximators can frequently be attributed to interference, which occurs when learning in one area of the input space causes unlearning in another area. To mitigate the effect of unlearning, this paper develops an algorithm that adjusts the weights of an arbitrary, nonlinearly parameterized network such that the potential for future interference during learning is reduced. This is accomplished by the reduction of a biobjective cost function that combines the approximation error and a term that measures interference. An analysis of the algorithm's convergence properties shows that learning with this algorithm reduces future unlearning. The algorithm can be used either during online learning or can be used to condition a network to have immunity from interference during a future learning stage. A simple example demonstrates how interference manifests itself in a network and how less interference can lead to more efficient learning. Simulations demonstrate how this new learning algorithm speeds up the training in various situations due to the extra cost function term.
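The abstract describes minimizing a biobjective cost: the approximation error at the current sample plus a term that measures interference. The sketch below is a hypothetical illustration of that idea, not the paper's exact algorithm: interference is approximated by how much the network's outputs drift at a set of "anchor" inputs where it was previously trained, and the combined cost is minimized by plain finite-difference gradient descent. All names (`model`, `biobjective_cost`, the anchor set, the weight `mu`) are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of a biobjective cost for interference-aware
# learning: squared error at the new sample plus a penalty on output
# drift at previously learned "anchor" inputs. Illustrative only.

rng = np.random.default_rng(0)

def model(w, x):
    """Tiny 1-5-1 tanh network: nonlinearly parameterized in 16 weights."""
    W1, b1 = w[:5], w[5:10]
    W2, b2 = w[10:15], w[15]
    h = np.tanh(W1 * x + b1)
    return float(W2 @ h + b2)

def biobjective_cost(w, x, y, anchors, anchor_vals, mu):
    """Approximation error at (x, y) plus an interference proxy."""
    err = 0.5 * (model(w, x) - y) ** 2
    interf = 0.5 * mu * sum(
        (model(w, xa) - fa) ** 2 for xa, fa in zip(anchors, anchor_vals)
    )
    return err + interf

def grad_fd(cost, w, eps=1e-5):
    """Finite-difference gradient of a scalar cost in the weights."""
    g = np.zeros_like(w)
    for i in range(len(w)):
        wp, wm = w.copy(), w.copy()
        wp[i] += eps
        wm[i] -= eps
        g[i] = (cost(wp) - cost(wm)) / (2 * eps)
    return g

w = 0.5 * rng.standard_normal(16)

# Pretend the network was already trained around these anchor inputs;
# anchor_vals records the outputs we would like to preserve.
anchors = [-1.0, 0.0, 1.0]
anchor_vals = [model(w, xa) for xa in anchors]

x_new, y_new = 0.5, 1.0  # new sample that could cause interference
mu = 1.0                 # weight on the interference term

cost = lambda w_: biobjective_cost(w_, x_new, y_new, anchors, anchor_vals, mu)
J0 = cost(w)
e0 = abs(model(w, x_new) - y_new)
for _ in range(300):
    w = w - 0.05 * grad_fd(cost, w)
J1 = cost(w)
e1 = abs(model(w, x_new) - y_new)

drift = max(abs(model(w, xa) - fa) for xa, fa in zip(anchors, anchor_vals))
print(f"cost {J0:.4f} -> {J1:.4f}, error {e0:.4f} -> {e1:.4f}, "
      f"anchor drift {drift:.4f}")
```

Because the interference term is zero at the starting weights, the first descent steps behave like ordinary error minimization; the penalty only activates as the anchor outputs begin to drift, which is what slows unlearning in this toy setting.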