On the challenge of training small scale neural networks on large scale computing systems

  • We present a novel approach to distributing small- to mid-scale neural networks onto modern parallel architectures. In this context we discuss the induced challenges and possible solutions. We provide a detailed theoretical analysis with respect to space and time complexity and reinforce our computation model with evaluations that show a performance gain over state-of-the-art approaches.

Authors: Darius Malysiak, Matthias Grimm, Uwe Handmann
Parent Title (English): 16th IEEE International Symposium on Computational Intelligence and Informatics (CINTI)
Document Type: Conference Proceeding
Year of Completion: 2015
Contributing Corporation: IEEE
Release Date: 2019/07/03
First Page: 273
Last Page: 284
Institutes: Fachbereich 1 - Institut Informatik
DDC class: 000 Generalities, computer science, information science / 004 Computer science
Licence (German): No Creative Commons