Training Artificial Neural Networks (ANNs) on large datasets is a time-consuming task. In this paper, the training of an artificial neural network is accelerated by parallelizing it on either a multicore Central Processing Unit (CPU) or a General Purpose Graphics Processing Unit (GPGPU). Training is carried out on five datasets with diverse numbers of patterns and with different neural network parameters for a Multilayer Perceptron (MLP). The results show a significant increase in computation speed, which scales nearly linearly with the number of cores of the multicore processor for problems with medium and large training datasets. A considerable speedup is also achieved when the GPU is used to train the MLP on the large training datasets, whereas a single-core processor is the better choice when the dataset is small. The number of cores, or the type of parallel platform, should therefore be chosen according to the computational load.