Detail
Learning Efficiency of Redundant Neural Networks in Bayesian Estimation
By:
Watanabe, S.
Type:
Article from Journal - international scholarly
In collection:
IEEE Transactions on Neural Networks vol. 12 no. 6 (2001), pages 1475-1486.
Topics:
Bayesian; learning efficiency; redundant; neural networks; Bayesian estimation
Availability
Central Library (Semanggi)
Call Number:
II36.6
Non-reserve copies:
1 (available for loan: 0)
Reserve copies:
none
Abstract
This paper proves that the Bayesian stochastic complexity of a layered neural network is asymptotically smaller than that of a regular statistical model if it contains the true distribution. We consider the case when a three-layer perceptron with M input units, H hidden units, and N output units is trained to estimate the true distribution represented by a model with H0 hidden units, and prove that the stochastic complexity is asymptotically smaller than (1/2){H0(M + N) + R} log n, where n is the number of training samples and R is a function of H - H0, M, and N that is far smaller than the number of redundant parameters. Since the generalization error of Bayesian estimation is equal to the increase of the stochastic complexity, it is smaller than (1/(2n)){H0(M + N) + R} if it has an asymptotic expansion. Based on these results, the difference between layered neural networks and regular statistical models is discussed from the statistical point of view.
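To make the abstract's bound concrete, the sketch below compares a BIC-style stochastic complexity for a regular model with d = H(M + N) weight parameters against the stated bound (1/2){H0(M + N) + R} log n. The paper derives R as a function of H - H0, M, and N; here R is left as a plain input, using only the stated fact that it is far smaller than the number of redundant parameters (H - H0)(M + N). All concrete numbers (M, H, H0, N, n, R) are illustrative, not taken from the paper.

```python
import math

def regular_complexity(M, H, N, n):
    """BIC-style term (d/2) log n for a regular model with d = H*(M+N) weights."""
    d = H * (M + N)
    return 0.5 * d * math.log(n)

def layered_bound(M, H0, N, R, n):
    """Upper bound (1/2){H0*(M+N) + R} log n stated in the abstract."""
    return 0.5 * (H0 * (M + N) + R) * math.log(n)

# Illustrative sizes: an overparameterized network (H = 8) estimating a
# true distribution realizable with H0 = 3 hidden units.
M, H, H0, N, n = 10, 8, 3, 2, 10_000
R = 1.0  # placeholder; the paper shows R << (H - H0)*(M + N) = 60

print("regular:", regular_complexity(M, H, N, n))
print("layered bound:", layered_bound(M, H0, N, R, n))
assert layered_bound(M, H0, N, R, n) < regular_complexity(M, H, N, n)
```

The gap between the two values is the paper's point: the redundant hidden units do not contribute the full (d/2) log n penalty a regular model would pay, so Bayesian estimation in the layered model generalizes better than the parameter count alone suggests.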