Optimal Training Sets for Bayesian Prediction of MeSH® Assignment
By: Sohn, Sunghwan; Won, Kim; Comeau, Donald C.; Wilbur, W. John
Type: Article from Journal - international scholarly
In collection: JAMIA (Journal of the American Medical Informatics Association) vol. 15 no. 4 (Jul. 2008), page 546.
Availability
FK Library
Call number: J43.K.2008.01
Non-reserve copies: 1 (available for loan: 0)
Reserve copies: none
Abstract
Objectives: The aim of this study was to improve naïve Bayes prediction of Medical Subject Headings (MeSH) assignment to documents using optimal training sets found by an active-learning-inspired method.

Design: The authors selected 20 MeSH terms whose occurrences cover a range of frequencies. For each MeSH term, they found an optimal training set, a subset of the whole training set. An optimal training set consists of all documents that include a given MeSH term (the C1 class) together with those documents not including that term (the C−1 class) that are closest to the C1 class. These small sets were used to predict MeSH assignments in the MEDLINE® database.

Measurements: Average precision was used to compare MeSH assignment using the naïve Bayes learner trained on the whole training set, optimal sets, and random sets. The authors compared 95% lower confidence limits of the average precisions of naïve Bayes with upper bounds on the average precisions of a K-nearest-neighbor (KNN) classifier.

Results: For all 20 MeSH assignments, the optimal training sets produced nearly a 200% improvement over use of the whole training sets. In 17 of those MeSH assignments, naïve Bayes using optimal training sets was statistically better than a KNN. In 15 of those, optimal training sets performed better than optimized feature selection. Overall, naïve Bayes averaged 14% better than a KNN across all 20 MeSH assignments. Using these optimal sets with another classifier, C-modified least squares (CMLS), produced an additional 6% improvement over naïve Bayes.

Conclusion: Using a smaller, optimal training set greatly improved learning with naïve Bayes, with performance superior to a KNN. The small training set can also be used with other sophisticated learning methods, such as CMLS, where using the whole training set would not be feasible.
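The training-set construction described in the Design section — keep every C1 document and only the C−1 documents closest to the C1 class — can be sketched as below. This is a minimal illustrative interpretation, not the paper's implementation: the function name `optimal_training_set`, the use of cosine similarity to the positive-class centroid as the "closeness" measure, and the toy term-count vectors are all assumptions for demonstration.

```python
import numpy as np

def optimal_training_set(X, y, n_neg):
    """Keep all positive (C1) documents plus only the n_neg negative
    (C-1) documents closest to the positive class.

    Closeness is measured here as cosine similarity to the positive-class
    centroid -- an assumption; the paper's own scoring may differ.
    """
    pos = X[y == 1]
    neg = X[y == -1]
    centroid = pos.mean(axis=0)
    # cosine similarity of each negative document to the positive centroid
    sims = neg @ centroid / (
        np.linalg.norm(neg, axis=1) * np.linalg.norm(centroid) + 1e-12
    )
    # take the n_neg negatives with the highest similarity
    closest = neg[np.argsort(-sims)[:n_neg]]
    X_opt = np.vstack([pos, closest])
    y_opt = np.concatenate([np.ones(len(pos)), -np.ones(len(closest))])
    return X_opt, y_opt

# toy term-count vectors: 3 positive and 4 negative documents
X = np.array([
    [3, 1, 0], [2, 2, 0], [3, 0, 1],              # C1 documents
    [0, 0, 5], [1, 0, 4], [2, 1, 1], [0, 3, 0],   # C-1 documents
], dtype=float)
y = np.array([1, 1, 1, -1, -1, -1, -1])

X_opt, y_opt = optimal_training_set(X, y, n_neg=2)
print(X_opt.shape)  # (5, 3): all 3 positives + the 2 closest negatives
```

The reduced set returned here would then be fed to any classifier (naïve Bayes in the paper, or a costlier learner such as CMLS) in place of the whole training set.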