This work explores a programmable accelerator for computing the exponential and natural logarithm functions, which are common in Spiking Neural Network (SNN) models, in the context of the SpiNNaker neuromorphic chip. The accelerator is integrated into the SpiNNaker2 chip, and its energy, area, and numerical accuracy trade-offs are evaluated. An early version of the accelerator, using fixed-point arithmetic, was included in a prototype SpiNNaker2 chip and tested in silicon, while the final, floating-point version is, at the time of writing, in manufacturing as part of another SpiNNaker2 prototype chip. Software techniques for improving the accuracy of the exponential function included in the very first SpiNNaker2 prototype chips are also presented.

Furthermore, the problem of simulation results on SpiNNaker differing from those obtained with floating-point arithmetic is explored. The numerical accuracy of Ordinary Differential Equation (ODE) solvers for the Izhikevich neuron model, previously shown to be a major challenge in fixed-point arithmetic, is addressed. Any simulation of a physical system has multiple sources of error, including errors in measurement, modelling, numerical methods, and finite-precision computer arithmetic. Here the last of these is addressed: it is shown, using the Izhikevich neuron on SpiNNaker, that various problems with fixed-point arithmetic caused the arithmetic error to be substantially larger than expected for 32-bit data types. Accuracy is improved by rounding constants on conversion from decimal to fixed-point format, rounding multiplier results, and using mixed-precision operations. The stochastic rounding method, which rounds a value up or down with probability proportional to its distance from the two nearest representable numbers, is shown experimentally to improve the accuracy of a series of ODE solvers beyond that of the standard rounding routines.
Motivated by these results, a hardware accelerator for the SpiNNaker2 chip is explored to speed up stochastic rounding.
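The stochastic rounding rule described above can be sketched as follows. This is a minimal illustrative model in Python, not the SpiNNaker software or hardware implementation; the function name, the choice of fractional-bit count, and the use of a software pseudo-random number generator are all assumptions made for the sketch.

```python
import math
import random


def stochastic_round(x: float, frac_bits: int = 15) -> float:
    """Round x to a fixed-point grid with `frac_bits` fractional bits.

    The value is rounded up with probability equal to its fractional
    distance from the lower representable neighbour, so the rounding
    is unbiased on average (an illustrative sketch only).
    """
    scale = 1 << frac_bits          # grid spacing is 1 / scale
    scaled = x * scale
    lower = math.floor(scaled)
    frac = scaled - lower           # in [0, 1): distance to lower neighbour
    if random.random() < frac:      # round up with probability `frac`
        lower += 1
    return lower / scale
```

Averaged over many invocations, the result converges to the exact input value, which is the property that helps ODE solvers avoid the systematic drift introduced by deterministic rounding.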