Asymptotic Statistical Theory of Overtraining and Cross-Validation
By:
Amari, S.; Finke, M.; Murata, N.; Müller, K.-R.; Yang, H. H.
Type:
Article from Journal - international scholarly
In collection:
IEEE Transactions on Neural Networks vol. 8 no. 5 (1997), pp. 985-996.
Topics:
validation; asymptotic; statistical theory; overtraining; cross-validation
Availability
Perpustakaan Pusat (Semanggi)
Call number:
II36.2
Non-reserve copies:
1 (available for loan: 0)
Reserve copies:
none
Abstract
A statistical theory for overtraining is proposed. The analysis treats general realizable stochastic neural networks, trained with Kullback-Leibler divergence in the asymptotic case of a large number of training examples. It is shown that the asymptotic gain in the generalization error is small if we perform early stopping, even if we have access to the optimal stopping time. Based on cross-validation stopping, we consider the ratio in which the examples should be divided into training and cross-validation sets in order to obtain optimum performance. Although cross-validated early stopping is useless in the asymptotic region, it surely decreases the generalization error in the nonasymptotic region. Our large-scale simulations, performed on a CM5, are in good agreement with our analytical findings.
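To make the procedure under study concrete: cross-validated early stopping holds out a fraction r of the examples, trains on the remainder, and stops training once the held-out error stops improving. The following is a minimal Python/NumPy sketch on a toy linear-regression problem, purely for illustration; it is not the authors' experimental setup (which used a CM5), and the split ratio r = 0.2, the patience rule, and the model are all illustrative assumptions. The paper's question is how r should be chosen asymptotically; the sketch simply fixes it.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy linear target with 10 input features.
n = 1000
X = rng.normal(size=(n, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.5 * rng.normal(size=n)

# Divide the examples into training and cross-validation sets with
# ratio r -- the quantity whose optimal value the paper analyses.
r = 0.2                      # illustrative choice, not the paper's optimum
n_val = int(r * n)
X_val, y_val = X[:n_val], y[:n_val]
X_tr, y_tr = X[n_val:], y[n_val:]

w = np.zeros(10)             # model parameters
lr = 0.01                    # learning rate
best_val, best_w, patience, bad = np.inf, w.copy(), 20, 0

for epoch in range(5000):
    # Gradient step on the mean-squared training error.
    grad = 2.0 * X_tr.T @ (X_tr @ w - y_tr) / len(y_tr)
    w -= lr * grad

    # Cross-validated early stopping: stop once the validation error
    # has failed to improve for `patience` consecutive epochs.
    val = float(np.mean((X_val @ w - y_val) ** 2))
    if val < best_val:
        best_val, best_w, bad = val, w.copy(), 0
    else:
        bad += 1
        if bad >= patience:
            break

print(f"stopped at epoch {epoch}, best validation MSE {best_val:.4f}")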