Deep Learning Models on CPUs: A Methodology for Efficient Training
Author
Abstract

GPUs have been favored for training deep learning models because of their highly parallelized architecture, and as a result most studies on training optimization focus on GPUs. There is often a trade-off, however, between cost and efficiency when choosing hardware for training. In particular, CPU servers could be beneficial if training on CPUs were more efficient: they incur lower hardware upgrade costs and make better use of existing infrastructure.

Year of Publication
2023
Journal
Journal of Machine Learning Theory, Applications and Practice
Volume
1
Date Published
04/2023
URL
https://www.journal.riverpublishers.com/index.php/JMLTAP/article/view/268
DOI
https://doi.org/10.13052/jmltapissn.2022.003