On The Optimality of Neural-Network Approximation Using Incremental Algorithms
By: Meir, R.; Maiorov, V. E.
Type: Journal article - international scholarly
In collection: IEEE Transactions on Neural Networks vol. 11 no. 2 (2000), pages 323-337.
Topics: algorithms; optimality; neural-network approximation; incremental algorithms
Availability
Central Library (Semanggi)
Call number: II36.4
Non-reserve: 1 (available for loan: 0)
Reserve: none
Abstract
The problem of approximating functions by neural networks using incremental algorithms is studied. For functions belonging to a rather general class, characterized by certain smoothness properties with respect to the L2 norm, we compute upper bounds on the approximation error, where error is measured by the Lq norm, 1 ≤ q ≤ ∞. These results extend previous work, applicable in the case q = 2, and provide an explicit algorithm to achieve the derived approximation error rate. In the range q ≤ 2, near-optimal rates of convergence are demonstrated. A gap remains, however, with respect to a recently established lower bound in the case q > 2, although the rates achieved are provably better than those obtained by optimal linear approximation. Extensions of the results from the L2 norm to Lp are also discussed. A further interesting conclusion from our results is that no loss of generality is suffered using networks with positive hidden-to-output weights. Moreover, explicit bounds on the size of the hidden-to-output weights are established, which are sufficient to guarantee the established convergence rates.
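To make the idea of incremental approximation concrete, the sketch below builds a one-hidden-layer network one sigmoidal unit at a time, fitting each new unit to the current residual. This is a generic greedy-residual scheme for illustration only, not the specific algorithm analyzed in the paper; the function names, the random candidate search, and all parameter values are illustrative assumptions.

```python
import numpy as np

def incremental_fit(x, y, n_units=20, n_candidates=200, seed=0):
    """Greedy incremental approximation (illustrative sketch).

    At each step, draw random candidate hidden units, pick the one
    whose least-squares fit to the current residual reduces the
    squared error the most, and add it to the network.
    """
    rng = np.random.default_rng(seed)
    units = []                      # list of (w, b, c) triples
    f_hat = np.zeros_like(y)        # current network output
    for _ in range(n_units):
        r = y - f_hat               # residual the new unit must fit
        best = None
        for _ in range(n_candidates):
            w = rng.normal(scale=5.0)   # hidden weight (assumed scale)
            b = rng.normal(scale=5.0)   # hidden bias (assumed scale)
            phi = 1.0 / (1.0 + np.exp(-(w * x + b)))  # sigmoid unit
            denom = phi @ phi
            if denom < 1e-12:
                continue
            c = (phi @ r) / denom       # least-squares output coefficient
            err = np.sum((r - c * phi) ** 2)
            if best is None or err < best[0]:
                best = (err, w, b, c, phi)
        _, w, b, c, phi = best
        units.append((w, b, c))
        f_hat = f_hat + c * phi     # incrementally extend the network
    return f_hat, units

# Usage: approximate sin(3x) on [-1, 1] with 20 incremental units.
x = np.linspace(-1.0, 1.0, 200)
y = np.sin(3.0 * x)
f_hat, units = incremental_fit(x, y)
mse = float(np.mean((y - f_hat) ** 2))
```

Because each output coefficient is chosen by least squares against the residual, the squared error is non-increasing in the number of units, which is the basic mechanism incremental schemes exploit; the paper's contribution is quantifying how fast such error can decay in Lq norms.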