Article: Markovian Architectural Bias of Recurrent Neural Networks
By: Tino, P.; Cernansky, M.; Benuskova, L.
Type: Article from Journal - international scholarly
In collection: IEEE Transactions on Neural Networks vol. 15 no. 1 (Jan. 2004), pages 6-15.
Topics: Architectures; Markovian; architectural bias; neural networks
Availability
  • Central Library (Semanggi)
    • Call Number: II36.10
    • Non-reserve: 1 (available for loan: 0)
    • Reserve: none
Abstract: In this paper, we elaborate upon the claim that clustering in the recurrent layer of recurrent neural networks (RNNs) reflects meaningful information-processing states even prior to training. By concentrating on activation clusters in RNNs, while not throwing away the continuous state-space network dynamics, we extract predictive models that we call neural prediction machines (NPMs). When RNNs with sigmoid activation functions are initialized with small weights (a common technique in the RNN community), the clusters of recurrent activations emerging prior to training are indeed meaningful and correspond to Markov prediction contexts. In this case, the extracted NPMs correspond to a class of Markov models, called variable memory length Markov models (VLMMs). In order to appreciate how much information has really been induced during the training, the RNN performance should always be compared with that of VLMMs and NPMs extracted before training as the "null" base models. Our arguments are supported by experiments on a chaotic symbolic sequence and a context-free language with a deep recursive structure.
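The extraction procedure the abstract describes can be illustrated with a minimal sketch: drive a small, untrained RNN with small random weights over a symbolic sequence, cluster the recurrent activations, and turn each cluster into a next-symbol predictive distribution. All names and sizes below (`n_hidden`, `n_clusters`, the tanh nonlinearity, plain k-means with add-one smoothing) are illustrative assumptions, not the authors' exact construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a random binary symbolic sequence.
alphabet = 2
seq = rng.integers(0, alphabet, size=500)

n_hidden = 8
# Small initial weights: the regime in which recurrent activations
# cluster by recent input history (the Markovian architectural bias).
W_in = rng.normal(scale=0.1, size=(n_hidden, alphabet))
W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))

# Drive the untrained RNN and record its hidden states.
h = np.zeros(n_hidden)
states = []
for s in seq:
    x = np.eye(alphabet)[s]          # one-hot input
    h = np.tanh(W_in @ x + W_rec @ h)
    states.append(h.copy())
states = np.array(states)

# Plain k-means on the recurrent activations (a few Lloyd iterations).
n_clusters = 4
centers = states[rng.choice(len(states), n_clusters, replace=False)]
for _ in range(20):
    labels = np.argmin(((states[:, None] - centers) ** 2).sum(-1), axis=1)
    for k in range(n_clusters):
        if np.any(labels == k):
            centers[k] = states[labels == k].mean(axis=0)

# NPM: per-cluster next-symbol counts, smoothed into distributions.
counts = np.ones((n_clusters, alphabet))  # add-one (Laplace) smoothing
for t in range(len(seq) - 1):
    counts[labels[t], seq[t + 1]] += 1
npm = counts / counts.sum(axis=1, keepdims=True)

print(npm.shape)  # one next-symbol distribution per activation cluster
```

In the spirit of the paper's "null" base-model argument, the predictive performance of such a before-training NPM (or of a VLMM fit to the same sequence) gives the baseline against which a trained RNN's performance should be compared.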