Averaging, Maximum Penalized Likelihood and Bayesian Estimation for Improving Gaussian Mixture Probability Density Estimates
By: Tresp, V.; Ormoneit, D.
Type: Article from Journal - international scholarly
In collection: IEEE Transactions on Neural Networks vol. 9 no. 4 (1998), pages 639-650.
Topics: Gaussian; averaging; maximum penalized likelihood; Bayesian; Gaussian mixture; density
Availability
Perpustakaan Pusat (Semanggi)
Call number: II36.3
Non-reserve copies: 1 (available for loan: 0)
Reserve copies: none
Abstract
We apply the idea of averaging ensembles of estimators to probability density estimation. In particular, we use Gaussian mixture models, which are important components in many neural-network applications. We investigate the performance of averaging using three data sets. For comparison, we employ two traditional regularization approaches, i.e., a maximum penalized likelihood approach and a Bayesian approach. In the maximum penalized likelihood approach we use penalty functions derived from conjugate Bayesian priors such that an expectation-maximization (EM) algorithm can be used for training. In all experiments, the maximum penalized likelihood approach and averaging improved performance considerably compared with a maximum likelihood approach. In two of the experiments, the maximum penalized likelihood approach outperformed averaging; in one experiment averaging was clearly superior. Our conclusion is that maximum penalized likelihood gives good results if the penalty term in the cost function is appropriate for the particular problem. If this is not the case, averaging is superior, since it shows greater robustness by not relying on any particular prior assumption. The Bayesian approach worked very well on a low-dimensional toy problem but failed to give good performance in higher-dimensional problems.
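The sketch below illustrates the "averaging" idea described in the abstract: fit several Gaussian mixture density estimators and average their predicted densities. It is not the authors' code; it assumes scikit-learn's GaussianMixture (a plain maximum-likelihood EM fit), bootstrap resampling as the source of ensemble diversity, and made-up toy data. The penalized-likelihood and Bayesian variants studied in the paper are not reproduced here.

# Minimal sketch of ensemble averaging for Gaussian mixture density estimation.
# Assumptions: scikit-learn's GaussianMixture, bootstrap resampling, toy data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy data: samples from two one-dimensional Gaussian clusters (illustrative only).
X = np.concatenate([rng.normal(-2.0, 0.5, size=(200, 1)),
                    rng.normal(3.0, 1.0, size=(300, 1))])

def averaged_density(X, x_eval, n_members=10, n_components=2):
    """Average the densities of an ensemble of GMMs fit on bootstrap resamples."""
    densities = []
    for _ in range(n_members):
        idx = rng.integers(0, len(X), size=len(X))            # bootstrap resample
        gmm = GaussianMixture(n_components=n_components).fit(X[idx])  # EM fit
        densities.append(np.exp(gmm.score_samples(x_eval)))   # per-member p(x)
    return np.mean(densities, axis=0)                         # ensemble average

x_eval = np.linspace(-5, 7, 200).reshape(-1, 1)
p_avg = averaged_density(X, x_eval)
print(p_avg[:5])

Averaging over bootstrap members is one simple way to realize the robustness the abstract attributes to averaging: no single penalty term or prior has to be chosen, at the cost of fitting the mixture several times.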