Upper Bounds on the Number of Hidden Neurons in Feedforward Networks With Arbitrary Bounded Nonlinear Activation Functions
By: Huang, Guang-Bin; Babri, H. A.
Type: Article from journal - international scientific
In collection: IEEE Transactions on Neural Networks vol. 9 no. 1 (1998), pages 224-229.
Topics: neurons; hidden neurons; networks; arbitrary; nonlinear activation
Availability
Central Library (Semanggi)
Call number: II36.3
Non-reserve: 1 (available for loan: 0)
Reserve: none
Abstract
It is well known that standard single-hidden-layer feedforward networks (SLFNs) with at most N hidden neurons (including biases) can learn N distinct samples (xi, ti) with zero error, and that the weights connecting the input neurons to the hidden neurons can be chosen "almost" arbitrarily. However, these results have been obtained for the case where the activation function of the hidden neurons is the signum function. This paper rigorously proves that standard SLFNs with at most N hidden neurons and with any bounded nonlinear activation function that has a limit at one infinity can learn N distinct samples (xi, ti) with zero error. The previous method of arbitrarily choosing weights is not feasible for any SLFN. The proof of our result is constructive and thus gives a method to directly find the weights of standard SLFNs with any such bounded nonlinear activation function, as opposed to the iterative training algorithms in the literature.
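The core idea of the abstract can be illustrated with a small sketch (this is an assumption-laden illustration, not the paper's actual construction): take N distinct samples, an SLFN with N hidden neurons using a bounded nonlinear activation (tanh here), pick the hidden-layer weights essentially arbitrarily, and obtain the output weights by solving the resulting N-by-N linear system exactly rather than by iterative training. All variable names are hypothetical.

```python
import numpy as np

# Sketch: N distinct samples, N hidden neurons, bounded nonlinear
# activation (tanh). Hidden-layer weights are chosen randomly
# ("almost arbitrarily"); output weights come from an exact linear
# solve of H @ beta = T, giving zero training error (up to machine
# precision) when H is invertible, which holds generically here.
rng = np.random.default_rng(0)
N, d = 10, 3                      # N distinct samples, input dimension d
X = rng.standard_normal((N, d))   # inputs x_i
T = rng.standard_normal((N, 1))   # targets t_i

W = rng.standard_normal((d, N))   # input-to-hidden weights
b = rng.standard_normal(N)        # hidden biases
H = np.tanh(X @ W + b)            # N x N hidden-layer output matrix

beta = np.linalg.solve(H, T)      # output weights: direct solve, no iteration
pred = H @ beta
err = np.max(np.abs(pred - T))    # near zero
```

Note the contrast with iterative training: nothing here is optimized by gradient descent; the output weights are computed in one linear-algebra step.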