On the challenge of training small scale neural networks on large scale computing systems

  • We present a novel approach to distributing small- to mid-scale neural networks onto modern parallel architectures. In this context we discuss the challenges this induces and possible solutions. We provide a detailed theoretical analysis of the space and time complexities and reinforce our computation model with evaluations that show a performance gain over state-of-the-art approaches.

Metadata
Author: Darius Malysiak, Matthias Grimm, Uwe Handmann
URL: https://ieeexplore.ieee.org/document/7382935
DOI: https://doi.org/10.1109/CINTI.2015.7382935
ISBN: 978-1-4673-8520-6
Parent Title (English): 16th IEEE International Symposium on Computational Intelligence and Informatics (CINTI)
Document Type: Conference Proceeding
Language: English
Year of Completion: 2015
Contributing Corporation: IEEE
Release Date: 2019/07/03
Number of Pages: 12
First Page: 273
Last Page: 284
Institutes: Fachbereich 1 - Institut Informatik
DDC class: 000 General works, computer science, information science / 004 Computer science
Licence (German): No Creative Commons