Article: Sequence Encoders Enable Large-Scale Lexical Modeling: Reply to Bowers and Davis (2009)
By: Sibley, Daragh E.; Kello, Christopher T.; Plaut, David C.; Elman, Jeffrey L.
Type: Article from Journal - international scholarly
In collection: Cognitive Science vol. 33 no. 7 (Sep. 2009), pages 1187–1191.
Topics: Large-scale modeling; Sequence encoder; Orthography; Phonology; Word forms
Fulltext: 02. Sequence Encoders Enable Large-Scale Lexical Modeling - Reply to Bowers and Davis (2009).pdf (43.31KB)
Abstract: Sibley, Kello, Plaut, and Elman (2008) proposed the sequence encoder as a model that learns fixed-width distributed representations of variable-length sequences. In doing so, the sequence encoder overcomes problems that have restricted models of word reading and recognition to processing only monosyllabic words. Bowers and Davis (2009) recently claimed that the sequence encoder does not actually overcome the relevant problems, and hence it is not a useful component of large-scale word-reading models. In this reply, it is noted that the sequence encoder has facilitated the creation of large-scale word-reading models. The reasons for this success are explained and stand as counterarguments to claims made by Bowers and Davis.
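The core idea in the abstract, that a recurrent network can map letter strings of any length onto a representation of one fixed width, can be illustrated with a minimal sketch. This is a hypothetical toy version for illustration only, not the authors' implementation (their sequence encoder is trained; here the weights are simply random), and all names and sizes are assumptions:

```python
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
HIDDEN = 16  # width of the fixed-size representation (assumed value)

rng = np.random.default_rng(0)
W_in = rng.normal(0, 0.1, (HIDDEN, len(ALPHABET)))  # input-to-hidden weights
W_rec = rng.normal(0, 0.1, (HIDDEN, HIDDEN))        # recurrent weights

def one_hot(ch):
    """Encode a single lowercase letter as a one-hot vector."""
    v = np.zeros(len(ALPHABET))
    v[ALPHABET.index(ch)] = 1.0
    return v

def encode(word):
    """Read a word one letter at a time through a simple recurrent step,
    returning the final hidden state as a fixed-width representation."""
    h = np.zeros(HIDDEN)
    for ch in word:
        h = np.tanh(W_in @ one_hot(ch) + W_rec @ h)
    return h

# A monosyllabic and a multisyllabic word yield vectors of the same width,
# which is the property that lets such models scale past monosyllables.
print(encode("cat").shape, encode("encyclopedia").shape)  # (16,) (16,)
```

The point of the sketch is only the shape invariance: however long the input sequence, the output lives in the same fixed-width space, so downstream components can operate on words of any length uniformly.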