Detail
Article: Speaking Mode Variability in Multimodal Speech Production
By: Vatikiotis-Bateson, E.; Yehia, H. C.
Type: Article from Journal - international scholarly
In collection: IEEE Transactions on Neural Networks vol. 13 no. 4 (2002), pages 894-899.
Topics: SPEECH; speaking mode; variability; multimodal speech; production
Availability
  • Central Library (Semanggi)
    • Call Number: II36.7A
    • Non-reserve copies: 1 (available for loan: 0)
    • Reserve copies: none
Article content: The speech acoustics and the phonetically relevant motion of the face during speech are determined by the time-varying behaviour of the vocal tract. A benefit of this linkage is that we are able to estimate face motion from the spectral acoustics during speech production using simple neural networks. Thus far, however, the scope of reliable estimation has been limited to individual sentences; network training degrades sharply when multiple sentences are analyzed together. While there are a number of potential avenues for improving network generalization, this paper investigates the possibility that the experimental recording procedures introduce artificial boundary constraints between sentence-length utterances. Specifically, the same sentence materials were recorded individually and as part of longer, paragraph-length utterances. The scope of reliable network estimation was found to depend both on the length of the utterance (sentence versus paragraph) and, not surprisingly, on phonetic content: estimation of face motion from speech acoustics was reliable for larger sentence training sets when sentences were recorded in continuous paragraph readings, and greater phonetic diversity reduced reliability.
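
The acoustics-to-face-motion mapping described in the abstract can be illustrated with a minimal sketch; this is not the authors' implementation, and the feature dimensions, the synthetic paired data, and the use of scikit-learn's MLPRegressor are all assumptions made only for illustration.

# Minimal sketch, not the authors' implementation: a small feedforward
# network that regresses per-frame spectral acoustic features onto 3D
# face-marker positions. All dimensions and the paired data below are
# synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

N_FRAMES = 5000    # analysis frames pooled across utterances (assumed)
N_ACOUSTIC = 10    # spectral features per frame (assumed)
N_COORDS = 12 * 3  # 12 face markers tracked in 3D (assumed)

# Synthetic stand-in for time-aligned acoustic / face-motion recordings.
acoustics = rng.standard_normal((N_FRAMES, N_ACOUSTIC))
true_map = rng.standard_normal((N_ACOUSTIC, N_COORDS))
face_motion = acoustics @ true_map + 0.1 * rng.standard_normal((N_FRAMES, N_COORDS))

X_train, X_test, y_train, y_test = train_test_split(
    acoustics, face_motion, test_size=0.2, random_state=0)

# "Simple neural network": one small hidden layer, frame-by-frame regression.
net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

# Reliability is summarized here as the mean correlation between estimated
# and measured marker trajectories on held-out frames.
pred = net.predict(X_test)
corr = np.mean([np.corrcoef(pred[:, j], y_test[:, j])[0, 1]
                for j in range(pred.shape[1])])
print(f"Mean estimated-vs-measured correlation: {corr:.3f}")

In the paper's setting, training and evaluation would be split across sentence-length versus paragraph-length utterances rather than across random frames, since that contrast is the manipulation the abstract describes.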