Managing burstiness and scalability in event-driven models on the SpiNNaker neuromimetic system

Research output: Contribution to journal › Article › peer-review

External authors:
  • Alexander Rast
  • Javier Navaridas Palma
  • X Jin
  • F Galluppi
  • J Miguel-Alonso
  • C Patterson
  • Mikel Lujan

Abstract

Neural networks present a fundamentally different model of computation from the conventional sequential digital model, for which conventional hardware is typically poorly matched. However, a combination of model and scalability limitations has meant that neither dedicated neural chips nor FPGAs have offered an entirely satisfactory solution. SpiNNaker introduces a different approach, the "neuromimetic" architecture, that maintains the neural optimisation of dedicated chips while offering FPGA-like universal configurability. This parallel multiprocessor employs an asynchronous event-driven model that uses interrupt-generating dedicated hardware on the chip to support real-time neural simulation. Nonetheless, event handling, particularly packet servicing, requires careful and innovative design in order to avoid local processor congestion and possible deadlock. We explore the impact that spatial locality, temporal causality and burstiness of traffic have on network performance, using tunable, biologically similar synthetic traffic patterns. Having established the viability of the system for real-time operation, we use two exemplar neural models to illustrate how to implement efficient event-handling service routines that mitigate the problem of burstiness in the traffic. Extending work published in ACM Computing Frontiers 2010 with on-chip testing, simulation results indicate the viability of SpiNNaker for large-scale neural modelling, while emphasising the need for effective burst management and network mapping. Ultimately, the goal is the creation of a library-based development system that can translate a high-level neural model from any description environment into an efficient SpiNNaker instantiation. The complete system represents a general-purpose platform that can generate an arbitrary neural network and run it with hardware speed and scale. © The Author(s) 2011.

Bibliographical metadata

Original language: English
Pages (from-to): 553-582
Number of pages: 30
Journal: International Journal of Parallel Programming
Volume: 40
Issue number: 6
DOIs
Publication status: Published - Dec 2012