Detail
Convergence of error-driven ranking algorithms
By:
Magri, Giorgio
Type:
Article from Journal - international scholarly
In collection:
Phonology (Full Text) vol. 26 no. 02 (Aug. 2009), pages 213-269.
Fulltext:
Magri_Giorgio, p. 213-269.pdf
(836.53KB)
Abstract
According to the OT error-driven ranking model of language acquisition, the learner performs a sequence of slight re-rankings triggered by mistakes on the incoming stream of data, until it converges to a ranking that makes no more mistakes. Two classical examples are Tesar & Smolensky's (1998) Error-Driven Constraint Demotion (EDCD) and Boersma's (1998) Gradual Learning Algorithm (GLA). Yet EDCD performs only constraint demotion, and is thus shown to predict ranking dynamics that are too simple from a modelling perspective. The GLA performs constraint promotion too, but has been shown not to converge. This paper develops a complete theory of convergence for error-driven ranking algorithms that perform both constraint demotion and promotion. In particular, it shows that convergent constraint promotion can be achieved (with an error bound that compares well to that of EDCD) through a proper calibration of the amount by which constraints are promoted.
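The general shape of an error-driven ranking update can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: constraints carry numeric ranking values, candidates are violation vectors, and on an error the loser-preferring constraints are demoted while the winner-preferring constraints are promoted. The fixed promotion amount of 0.5 used here is an arbitrary assumption for illustration; Magri's calibrated promotion amount depends on the structure of the error and is developed in the paper itself.

```python
def ot_compare(ranking, viol_a, viol_b):
    """Compare two candidates under an OT grammar given by numeric
    ranking values: walk constraints from highest- to lowest-ranked
    and decide on the first constraint where violations differ.
    Returns -1 if a is preferred, 1 if b is preferred, 0 on a tie."""
    order = sorted(range(len(ranking)), key=lambda i: -ranking[i])
    for i in order:
        if viol_a[i] != viol_b[i]:
            return -1 if viol_a[i] < viol_b[i] else 1
    return 0

def update(ranking, winner, loser, promotion=0.5, demotion=1.0):
    """One error-driven update step (illustrative sketch).
    `winner` is the observed form, `loser` the competing candidate,
    both as violation vectors. If the current ranking already prefers
    the winner, nothing happens; otherwise loser-preferring constraints
    are demoted and winner-preferring constraints are promoted.
    The promotion/demotion amounts here are toy values, not the
    calibrated amounts the paper derives."""
    if ot_compare(ranking, winner, loser) <= 0:
        return ranking, False  # no error: grammar already prefers the winner
    new = list(ranking)
    for i, (w, l) in enumerate(zip(winner, loser)):
        if w > l:        # constraint prefers the loser: demote it
            new[i] -= demotion
        elif w < l:      # constraint prefers the winner: promote it
            new[i] += promotion
    return new, True

# Toy run: two constraints; the winner violates C1, the loser violates C2,
# so the target grammar must rank C2 above C1. Start from the wrong ranking.
ranking = [1.0, 0.0]                       # C1 initially outranks C2
ranking, err = update(ranking, (1, 0), (0, 1))   # error: demote C1, promote C2
ranking, err2 = update(ranking, (1, 0), (0, 1))  # no error: converged
```

After the first update the ranking values become `[0.0, 0.5]`, so C2 now outranks C1 and the second pass over the same datum triggers no further error, which is exactly the convergence behaviour the paper analyses in general.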