A hybrid GPU-FPGA based design methodology for enhancing machine learning applications performance
The high-density computing requirements of machine learning (ML) are a challenging performance bottleneck. Constrained by sequential instruction execution, traditional general-purpose processors are ill-suited to efficient ML. In this work, we present an ML system design methodology based on GPUs and FPGAs to tackle this problem. The core idea of our proposal is that, when designing an ML platform, we leverage the graphics processing unit (GPU)'s high-density computing to perform model training and exploit the field-programmable gate array (FPGA)'s low latency to perform model inferencing. In between, we define a model converter, which transforms the model produced by the training module into one usable by the inferencing module. We evaluated our approach through two use cases: the first is handwritten digit recognition with a convolutional neural network, while the second predicts a data center's power usage effectiveness with a deep neural network regression algorithm. The experimental results indicate that our solution takes advantage of the GPU's and FPGA's parallel computing capacity to improve the efficiency of training and inferencing significantly. Meanwhile, the solution preserves the accuracy and the mean squared error of the models when converting them between the different frameworks.
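To make the train-convert-infer pipeline concrete, here is a minimal sketch. The abstract does not name the frameworks or the converter's internals, so the choices below are illustrative assumptions: PyTorch stands in for the GPU training framework, and ONNX export stands in for the model converter that hands the trained model to an FPGA inferencing toolchain.

```python
# Sketch of the hybrid pipeline described in the paper, under assumptions:
# PyTorch for GPU training, ONNX as the framework-neutral interchange format
# that an FPGA vendor flow would then compile for low-latency inference.
import torch
import torch.nn as nn

class DigitCNN(nn.Module):
    """Small CNN for 28x28 handwritten-digit images (use case 1)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, 10)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Training stage: runs on the GPU when one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = DigitCNN().to(device)
# ... train on GPU with an optimizer and digit-image batches ...

# Model converter stage: export the trained network to a format the
# FPGA inferencing module can ingest (ONNX is assumed here).
model.eval()
dummy = torch.randn(1, 1, 28, 28, device=device)
torch.onnx.export(model, dummy, "digit_cnn.onnx",
                  input_names=["image"], output_names=["logits"])

# Inferencing stage (not shown): the FPGA toolchain compiles
# digit_cnn.onnx (e.g., quantization, bitstream generation) and serves
# low-latency predictions on the device.
```

The same pattern applies to the second use case: the deep neural network regression model for power usage effectiveness would be trained on the GPU and passed through the same converter before FPGA deployment.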
Additional Info
Source: https://link.springer.com/article/10.1007/s12652-019-01357-4
About the Author: Xu Liu. Research interests include brain-inspired computing, deep learning, machine learning, embedded systems, and image processing.