
Keywords

MLP neural networks, floating-point (FLP) arithmetic, FPGA, VHDL

Abstract

In this paper, we propose a method for designing and implementing a multilayer perceptron (MLP) neural network based on the backpropagation (BP) learning algorithm. The method is described using the Very High Speed Integrated Circuit Hardware Description Language (VHDL), which is used in developing very large scale integration (VLSI) designs. First, an artificial neuron with a sigmoid activation function, the basic unit of the MLP, is designed and implemented. The MLP network is then trained with the BP algorithm in the Matlab environment in order to obtain the ideal network parameters. Hardware implementation of the MLP is carried out on FPGAs of the Spartan 3E and Virtex-4 types, using integer format and floating-point format respectively. Finally, the two arithmetic formats of the MLP implementations on FPGAs are compared.
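The core trade-off the abstract describes, integer (fixed-point) arithmetic versus floating point for the sigmoid activation, can be illustrated with a small sketch. This is not the paper's VHDL implementation: the Q8.8 fixed-point format, the quantization scheme, and all function names below are illustrative assumptions used only to show how the two number formats diverge for the sigmoid unit.

```python
import math

def sigmoid(x):
    # Reference floating-point sigmoid activation.
    return 1.0 / (1.0 + math.exp(-x))

FRAC_BITS = 8          # hypothetical Q8.8 fixed-point format
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    # Quantize a real value to Q8.8 (round to nearest integer step).
    return int(round(x * SCALE))

def fixed_sigmoid(xq):
    # Fixed-point sigmoid: quantize the exact result to Q8.8.
    # A real FPGA design would typically use a lookup table or a
    # piecewise-linear approximation instead of math.exp.
    return to_fixed(sigmoid(xq / SCALE))

# Sweep the input range [-8, 8] and measure the worst-case error
# of the integer format against the floating-point reference.
max_err = 0.0
for i in range(-8 * SCALE, 8 * SCALE + 1):
    err = abs(fixed_sigmoid(i) / SCALE - sigmoid(i / SCALE))
    max_err = max(max_err, err)
print(f"max quantization error (Q8.8): {max_err:.6f}")
```

Under this rounding scheme the worst-case error is bounded by half a least significant bit, about 0.002 for 8 fractional bits; widening the fraction field shrinks the error at the cost of FPGA resources, which is the essence of the integer-versus-FLP comparison the paper reports.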
https://doi.org/10.33899/rengj.2009.38557