While new neural hardware is increasingly emphasizing spiking neural models, there will still be a need to model classical neural networks like the multilayer perceptron (MLP) for the foreseeable future. Given that the trend in new chips is towards a "neuromimetic" design that specialises the hardware for neural networks but does not hardwire the model, it is worth examining whether it is possible to implement the MLP on such hardware and realise performance gains over conventional simulation on general-purpose computers. Using the SpiNNaker chip as a demonstration platform, we show that it is possible to find efficient mappings that improve both the performance and the scalability of the MLP network, allowing much larger models than are possible in software. These mappings, however, require careful consideration of how to transform the "timeless" MLP model into an event-driven implementation. Examination of the hardware performance also reveals the importance of distributing processing and traffic load so that local congestion does not cripple the simulation. With these considerations addressed, the hardware not only demonstrates the potential for significant performance improvement, it also illustrates important general techniques and methods for translating nonspiking models onto the emerging generation of spiking neural hardware. The results suggest both the form that models might take and the architectures that future neural hardware could adopt for optimum generality and performance. © 2012 IEEE.
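As a loose illustration of the transformation the abstract describes (not the paper's actual implementation, which runs on SpiNNaker's event-driven hardware), the "timeless" matrix-style MLP forward pass can be re-expressed as a queue of discrete events: each arriving value is an event, each unit accumulates weighted contributions as events arrive, and a unit fires its own event only once all of its inputs have been seen. All names and the data layout below are illustrative assumptions.

```python
# Hypothetical sketch: an event-driven re-casting of an MLP forward pass.
# This is an assumption-laden illustration, not the paper's SpiNNaker code.
import math
from collections import deque

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def event_driven_forward(weights, inputs):
    # weights[l][i][j]: weight from unit j in layer l to unit i in layer l+1
    n_layers = len(weights)
    acc = [[0.0] * len(W) for W in weights]            # partial sums per unit
    pending = [[len(W[0])] * len(W) for W in weights]  # contributions not yet seen
    outputs = {}
    # seed the queue with one event per input value: (layer, source index, value)
    events = deque((0, j, v) for j, v in enumerate(inputs))
    while events:
        layer, j, v = events.popleft()
        if layer == n_layers:          # event emitted by the output layer
            outputs[j] = v
            continue
        for i, row in enumerate(weights[layer]):
            acc[layer][i] += row[j] * v
            pending[layer][i] -= 1
            if pending[layer][i] == 0:  # all contributions arrived: fire
                events.append((layer + 1, i, sigmoid(acc[layer][i])))
    return [outputs[i] for i in range(len(weights[-1]))]
```

Because every unit waits for all of its fan-in before firing, the event-driven pass computes exactly the same values as the conventional synchronous matrix multiplication; on hardware, the event queue becomes packet traffic, which is why the load-distribution concerns mentioned above matter.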