Detail
Article: Probabilistic Learning Algorithms and Optimality Theory
By: Keller, Frank; Asudeh, Ash
Type: Article from Journal - international scholarly
In collection: Linguistic Inquiry (available on JSTOR) vol. 33 no. 2 (2002), pages 225-244.
Fulltext: vol 33 no 2 pp 225-244.pdf (2.43MB)
Availability
  • PKBB Library
    • Call number: 405/LII/33
    • Non-reserve copies: 1 (available for loan: 0)
    • Reserve copies: none
Abstract: This article provides a critical assessment of the Gradual Learning Algorithm (GLA) for probabilistic optimality-theoretic (OT) grammars proposed by Boersma and Hayes (2001). We discuss the limitations of a standard algorithm for OT learning and outline how the GLA attempts to overcome these limitations. We point out a number of serious shortcomings with the GLA: (a) A methodological problem is that the GLA has not been tested on unseen data, which is standard practice in computational language learning. (b) We provide counterexamples, that is, attested data sets that the GLA is not able to learn. (c) Essential algorithmic properties of the GLA (correctness and convergence) have not been proven formally. (d) By modeling frequency distributions in the grammar, the GLA conflates the notions of competence and performance. This leads to serious conceptual problems, as OT crucially relies on the competence/performance distinction.
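For readers unfamiliar with the algorithm under review: below is a minimal sketch of Stochastic OT evaluation and the GLA update rule as described by Boersma and Hayes (2001). The toy tableau, candidate names, constraint names, and numeric settings are hypothetical, chosen only to make the sketch self-contained and runnable; it is not the authors' implementation.

```python
import random

NOISE_SD = 2.0      # evaluation noise (Boersma and Hayes use sd = 2.0)
PLASTICITY = 0.1    # step size for ranking-value adjustments

def evaluate(ranking, tableau):
    """Stochastic OT evaluation: add Gaussian noise to each constraint's
    ranking value, rank constraints by the noisy values, and pick the
    candidate whose violation profile is lexicographically best."""
    noisy = {c: r + random.gauss(0.0, NOISE_SD) for c, r in ranking.items()}
    order = sorted(noisy, key=noisy.get, reverse=True)  # highest value outranks
    def profile(cand):
        return tuple(tableau[cand][c] for c in order)
    return min(tableau, key=profile)

def gla_update(ranking, tableau, correct):
    """One GLA learning step: if the learner's output mismatches the datum,
    demote constraints that prefer the error and promote constraints that
    prefer the correct form, each by the plasticity amount."""
    learner_out = evaluate(ranking, tableau)
    if learner_out == correct:
        return
    for c in ranking:
        if tableau[correct][c] > tableau[learner_out][c]:
            ranking[c] -= PLASTICITY   # constraint favours the error: demote
        elif tableau[correct][c] < tableau[learner_out][c]:
            ranking[c] += PLASTICITY   # constraint favours the datum: promote

# Hypothetical two-candidate, two-constraint tableau (violation counts).
tableau = {"cand_a": {"C1": 1, "C2": 0},
           "cand_b": {"C1": 0, "C2": 1}}
ranking = {"C1": 100.0, "C2": 100.0}

# Feed the learner data in which cand_a occurs 70% of the time; the GLA
# gradually separates the ranking values until the grammar's outputs
# approximate that frequency distribution.
for _ in range(5000):
    datum = "cand_a" if random.random() < 0.7 else "cand_b"
    gla_update(ranking, tableau, datum)
print(ranking)
```

The final loop illustrates the property the abstract criticizes under point (d): the GLA encodes the observed output frequencies directly in the grammar's ranking values, rather than keeping frequency (performance) apart from the grammar (competence).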