Detail
Article: Repairs to GLVQ: A New Family of Competitive Learning Schemes
By: Bezdek, J. C.; Karayiannis, N. B.; Pal, N. R.; Hathaway, R. J.; Pai, Pin-I
Type: Article from Journal - international scholarly
In collection: IEEE Transactions on Neural Networks vol. 7 no. 5 (1996), pages 1062-1071.
Topics: LEARNING; GLVQ; new family; competitive learning; schemes
Availability
  • Central Library (Semanggi)
    • Call Number: II36.1
    • Non-reserve copies: 1 (available for loan: 0)
    • Reserve copies: none
Article content
First, we identify an algorithmic defect of the generalized learning vector quantization (GLVQ) scheme that causes it to behave erratically for a certain scaling of the input data. We show that GLVQ can behave incorrectly because its learning rates are reciprocally dependent on the sum of squares of distances from an input vector to the node weight vectors. Finally, we propose a new family of models - the GLVQ-F family - that remedies the problem. We derive competitive learning algorithms for each member of the GLVQ-F model and prove that they are invariant to all scalings of the data. We show that GLVQ-F offers a wide range of learning models, since it reduces to LVQ as its weighting exponent (a parameter of the algorithm) approaches one from above. As this parameter increases, GLVQ-F then transitions to a model in which either all nodes may be excited according to their (inverse) distances from an input, or in which the winner is excited while losers are penalized. And as this parameter increases without limit, GLVQ-F updates all nodes equally. We illustrate the failure of GLVQ and the success of GLVQ-F with the IRIS data.
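
The limiting behaviors the abstract describes can be illustrated with a short sketch. This is not the authors' published algorithm; it assumes fuzzy c-means-style inverse-distance memberships with weighting exponent m, which reproduce the stated limits: winner-take-all (LVQ) as m approaches one from above, graded excitation of all nodes by inverse distance for intermediate m, and equal updates to all nodes as m grows without bound. The function name glvqf_update and its parameters are illustrative, not from the paper.

```python
import numpy as np

def glvqf_update(x, weights, m=2.0, lr=0.1, eps=1e-12):
    """One GLVQ-F-style competitive update (illustrative sketch).

    Every node weight vector moves toward the input x, each scaled by
    an inverse-distance membership with weighting exponent m. As
    m -> 1+ this approaches winner-take-all (LVQ); as m -> infinity
    the memberships flatten and all nodes update equally.
    """
    # Squared Euclidean distances from x to each node weight vector.
    d2 = np.sum((weights - x) ** 2, axis=1) + eps
    # Inverse-distance memberships: u_i = 1 / sum_j (d_i / d_j)^(1/(m-1)).
    ratios = (d2[:, None] / d2[None, :]) ** (1.0 / (m - 1.0))
    u = 1.0 / ratios.sum(axis=1)
    # Move each node toward x in proportion to its membership.
    return weights + lr * u[:, None] * (x - weights)

# Example: three 2-D prototypes pulled toward a single input.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))
x = np.array([1.0, 0.5])
W_new = glvqf_update(x, W, m=2.0, lr=0.1)
```

Because the memberships depend only on ratios of distances, a uniform rescaling of the data cancels out of the update, which matches the scale invariance the abstract claims for GLVQ-F and which it shows GLVQ itself lacks.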