Analysis of Speedup as Function of Block Size and Cluster Size for Parallel Feed-Forward Neural Networks on a Beowulf Cluster
By:
Morchen, F.
Type:
Article from Journal - international scientific journal
In collection:
IEEE Transactions on Neural Networks vol. 15 no. 2 (Mar. 2004), pages 515-527.
Topics:
cluster; speedup; function; block size; cluster size; neural networks; Beowulf cluster
Availability
Perpustakaan Pusat (Semanggi)
Call number:
II36.10
Non-reserve copies:
1 (available for loan: 0)
Reserve copies:
none
Article content
The performance of feed-forward neural networks trained with the backpropagation algorithm on a dedicated Beowulf cluster is analyzed. The concept of training set parallelism is applied. A new model for run time and speedup prediction is developed. With this model, the speedup and efficiency of one training iteration can be estimated as a function of block size and cluster size. The model is applied to three example problems representing different applications and network architectures. The model's estimates are more accurate than those of traditional run-time estimation methods and can be computed efficiently. Experiments show that a speedup of one iteration does not necessarily translate into a shorter training time to a given error level. To overcome this problem, a heuristic extension to training set parallelism, called weight averaging, is developed. The results show that parallel training should only be done on clusters with high-performance network connections or on a multiprocessor machine. A rule of thumb is given for how much network performance the cluster needs in order to achieve a speedup in neural network training time.
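The sketch below is only meant to make the two ideas in the abstract concrete: training set parallelism, where each node computes the gradient over its blocks of patterns and the partial gradients are summed into one batch update, and the weight-averaging heuristic, where each node takes its own step on local data and the resulting weight vectors are averaged. It is a single-process NumPy simulation, not the paper's Beowulf/message-passing implementation, and all function names, the toy network, and the parameter values (P, block, lr) are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)

def forward(w, x):
    """Single sigmoid layer -- a stand-in for a real feed-forward network."""
    return 1.0 / (1.0 + np.exp(-x @ w))

def gradient(w, x, t):
    """Gradient of the summed squared error over one block of patterns."""
    y = forward(w, x)
    delta = (y - t) * y * (1.0 - y)          # sigmoid derivative
    return x.T @ delta

# Toy training set: 1024 patterns, 8 inputs, 1 output.
X = rng.standard_normal((1024, 8))
T = (X[:, :1] > 0).astype(float)

P = 4            # cluster size (number of worker nodes)
block = 64       # block size: patterns handed to a node per work unit
lr = 0.1
w = rng.standard_normal((8, 1)) * 0.1

blocks  = [X[i:i + block] for i in range(0, len(X), block)]
targets = [T[i:i + block] for i in range(0, len(T), block)]

# Training set parallelism: node (k mod P) accumulates the gradient over
# its blocks; summing the partial gradients reproduces the serial batch
# update exactly -- only the run time changes on a real cluster.
partial = [np.zeros_like(w) for _ in range(P)]
for k, (xb, tb) in enumerate(zip(blocks, targets)):
    partial[k % P] += gradient(w, xb, tb)
w_parallel = w - lr * sum(partial)

# Weight averaging heuristic: each node steps from the current weights
# using only its local blocks, and the resulting weights are averaged.
local = []
for p in range(P):
    g = sum(gradient(w, xb, tb)
            for k, (xb, tb) in enumerate(zip(blocks, targets)) if k % P == p)
    local.append(w - lr * g)
w_averaged = np.mean(local, axis=0)

print("parallel batch update :", w_parallel.ravel()[:3])
print("weight-averaged update:", w_averaged.ravel()[:3])

In this one-step linear setting the averaged update reduces to a smaller effective step; the distinction the paper draws only matters over many iterations on a real cluster, where communication cost per iteration is what weight averaging is trying to amortize.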