On The Geometric Convergence of Neural Approximation
By:
Lavretsky, E.
Type:
Journal article - international scholarly journal
In collection:
IEEE Transactions on Neural Networks vol. 13 no. 2 (2002), pages 274-282.
Topics:
geometrical measurement; geometric; convergence; neural approximation
Availability
Perpustakaan Pusat (Semanggi)
Call Number:
II36.6
Non-reserve copies:
1 (available for loan: 0)
Reserve copies:
none
Abstract
We give upper bounds on the rates of approximation of a set of functions from a real Hilbert space, using convex greedy iterations. The approximation method was originally proposed and analyzed by Jones (1992). Barron (1993) applied the method to the set of functions computable by single-hidden-layer feedforward neural networks. It was shown that the networks achieve an integrated squared error of order O(1/n), where n is the number of iterations or, equivalently, of nodes in the network. Assuming that the functions to be approximated satisfy the so-called d-angular condition, we show that a rate of approximation of order O(q^n) is achievable, where 0 ≤ q < 1. Therefore, for the set of functions considered, the reported geometric rate of approximation improves on the Maurey-Jones-Barron upper bound. In the case of orthonormal convex greedy approximations, the d-angular condition is shown to be equivalent to geometric decay of the expansion coefficients. In finite dimensions, the d-angular condition is proven to hold for a wide class of functions.
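The geometric rate claimed for the orthonormal case is easy to check numerically. The following minimal Python sketch is an illustration of that rate, not code from the paper; the decay ratio q, the dimension d, and the greedy update are our own assumptions. It builds a target whose expansion coefficients in an orthonormal basis decay geometrically and, at each iteration, captures the atom with the largest remaining coefficient; the residual norm then shrinks like O(q^n):

import numpy as np

# Illustrative sketch, not the paper's algorithm: in an orthonormal basis,
# a greedy scheme that captures the largest remaining expansion coefficient
# at each step leaves a residual of norm
#   ||f - f_n|| = sqrt(sum_{k >= n} q**(2*k)) ~= q**n / sqrt(1 - q**2),
# i.e. the geometric rate O(q**n) from the abstract. The choices q = 0.5
# and d = 60 below are arbitrary.

q, d = 0.5, 60
c = q ** np.arange(d)        # geometrically decaying expansion coefficients
f = c.copy()                 # target, expressed in the orthonormal basis

approx = np.zeros(d)         # f_0 = 0
for n in range(1, 9):
    k = int(np.argmax(np.abs(f - approx)))   # greedy atom selection
    approx[k] = f[k]                         # capture that coefficient exactly
    err = np.linalg.norm(f - approx)
    print(f"n={n}  error={err:.3e}  bound={q ** n / np.sqrt(1 - q * q):.3e}")

Each printed error sits just below the reference value q**n / sqrt(1 - q**2) (the finite dimension d truncates the tail), so the ratio between consecutive errors is q, matching the geometric rate the abstract attributes to the d-angular condition in the orthonormal case.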