This thesis is a study of specialized circuits and systems targeted at machine learning algorithms. These systems operate on a computing paradigm different from the traditional von Neumann architecture and can potentially reduce power consumption and improve performance over conventional computers when running specialized tasks. To study them, case studies were conducted on implementations including TrueNorth, SpiNNaker, Neurogrid, pulse-stream neural networks, and memristor-based systems. The use of memristive crossbar arrays for machine learning proved particularly interesting and was chosen as the primary focus of this work.

This thesis presents an Unregulated Step Descent (USD) algorithm that can be used to train memristive crossbar arrays to run algorithms based on gradient-descent learning. It describes how the USD algorithm can address hardware limitations such as device variability, poor device models, and the complexity of training architectures. The experiments designed to study these features primarily used a linear classifier. This algorithm was chosen because its crossbar architecture extends easily to larger networks and, more importantly, because a simple algorithm makes it easier to draw inferences from experimental results. The datasets used in these experiments included randomly generated data and the MNIST digits dataset. The results indicate that the performance of crossbar arrays trained using the USD algorithm is reasonably close to that of the corresponding floating-point implementation. These experimental observations also provide a blueprint of how training and device parameters affect the performance of a crossbar array and how it might be improved. The thesis also covers how other machine learning algorithms, such as logistic regression, multi-layer perceptrons, and restricted Boltzmann machines, may be implemented on crossbar arrays using the USD algorithm.
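As a rough illustration only (the abstract does not specify USD's exact update rule), training a linear classifier with fixed-magnitude conductance steps might be sketched as below. The sketch assumes USD nudges each device by a fixed, "unregulated" step in the direction given by the sign of the error term, with weights clipped to the range a memristor pair could encode; the function and parameter names (`usd_train`, `step`, `w_min`, `w_max`) are hypothetical, not taken from the thesis.

```python
import numpy as np

# Hypothetical sketch of USD-style training for a crossbar-mapped
# linear classifier. Assumptions (not from the source): every
# misclassified example triggers a fixed-magnitude weight step in the
# direction of the gradient's sign, and weights saturate at the limits
# of what the memristive devices can represent.
def usd_train(X, y, step=0.05, epochs=100, w_min=-1.0, w_max=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # Random initial weights, standing in for unknown initial conductances
    W = rng.uniform(w_min, w_max, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1.0 if xi @ W + b > 0 else -1.0
            if pred != yi:
                # Fixed step per device, using only the sign of the
                # perceptron-style gradient (no magnitude information)
                W += step * np.sign(yi * xi)
                W = np.clip(W, w_min, w_max)  # device conductances saturate
                b += step * yi
    return W, b
```

Using only the sign of the gradient is what would make such a scheme attractive in hardware: each device receives an identical programming pulse, so no per-device analog magnitude control is needed, and the clipping step models the bounded conductance range that the abstract's discussion of device limitations alludes to.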