Accelerate the Training Process of BP Neural Network with CUDA Technology


Authors

Yinfen Xie - School of Information Science and Technology, Linyi University, Linyi, Shandong 276005, P. R. China


Abstract

NVIDIA GPUs are typical stream-processor devices with high floating-point performance. CUDA introduces a brand-new computing architecture and provides far greater computing power for large-scale data-intensive applications than a CPU. The learning algorithm of the BP neural network is compute-intensive and highly regular, which makes it well suited to the stream-processor architecture. Using CUDA, the CUBLAS mathematical library, and self-written kernels, with an NVIDIA GeForce GTX 280 as the hardware platform, we parallelize the learning algorithm, define a parallel data structure, and describe the mapping of the computing tasks onto CUDA together with the key algorithms. A simulation experiment compares the parallel learning algorithm on the GTX 280 with the serial algorithm on a CPU; the training time is improved by as much as nearly 15 times.
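The mapping the abstract describes rests on the fact that each BP training step can be written as dense matrix products, which is exactly the operation CUBLAS accelerates. The NumPy sketch below (hypothetical layer sizes, targets, and variable names; not the paper's code) shows one batched forward/backward pass in that matrix form; on the GPU, each `@` product would correspond to a `cublasSgemm` call, with the element-wise sigmoid and delta computations handled by self-written kernels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-16-1 network trained on a small batch; sigmoid activations.
n_in, n_hid, n_out, batch = 2, 16, 1, 64
W1 = rng.standard_normal((n_in, n_hid)) * 0.5   # input -> hidden weights
W2 = rng.standard_normal((n_hid, n_out)) * 0.5  # hidden -> output weights
X = rng.standard_normal((batch, n_in))
T = (X[:, :1] * X[:, 1:2] > 0).astype(float)    # toy XOR-like target

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

# Error before training, for comparison.
mse0 = float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - T) ** 2))

for _ in range(500):
    # Forward pass: two dense products (gemm calls on the GPU).
    H = sigmoid(X @ W1)                      # batch x n_hid
    Y = sigmoid(H @ W2)                      # batch x n_out
    # Backward pass: layer deltas, again expressed as matrix products.
    dY = (Y - T) * Y * (1.0 - Y)             # output-layer delta
    dH = (dY @ W2.T) * H * (1.0 - H)         # hidden-layer delta
    # Weight updates accumulated over the whole batch in one gemm each.
    W2 -= lr * (H.T @ dY) / batch
    W1 -= lr * (X.T @ dH) / batch

mse = float(np.mean((Y - T) ** 2))
```

Batching is what makes the GPU formulation pay off: processing all training samples at once turns many small vector operations into a few large matrix products, which keeps the stream processors busy.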


Share and Cite

ISRP Style

Yinfen Xie, Accelerate the Training Process of BP Neural Network with CUDA Technology, Journal of Mathematics and Computer Science, 18 (2018), no. 1, 1--10

AMA Style

Xie Y. Accelerate the Training Process of BP Neural Network with CUDA Technology. J Math Comput SCI-JM. 2018;18(1):1--10

Chicago/Turabian Style

Xie, Yinfen. "Accelerate the Training Process of BP Neural Network with CUDA Technology." Journal of Mathematics and Computer Science, 18, no. 1 (2018): 1--10

