Large-scale simulations of plastic neural networks on neuromorphic hardware

Citation formats

  • Authors:
  • James Courtney Knight
  • Philip Joseph Tully
  • Bernhard A Kaplan
  • Anders Lansner
  • Stephen Furber

Standard

Large-scale simulations of plastic neural networks on neuromorphic hardware. / Knight, James Courtney; Tully, Philip Joseph; Kaplan, Bernhard A; Lansner, Anders; Furber, Stephen.

In: Frontiers in Neuroanatomy, Vol. 10, No. 37, 37, 07.04.2016.

Research output: Contribution to journal › Article › peer-review

Harvard

Knight, JC, Tully, PJ, Kaplan, BA, Lansner, A & Furber, S 2016, 'Large-scale simulations of plastic neural networks on neuromorphic hardware', Frontiers in Neuroanatomy, vol. 10, no. 37, 37. https://doi.org/10.3389/fnana.2016.00037

APA

Knight, J. C., Tully, P. J., Kaplan, B. A., Lansner, A., & Furber, S. (2016). Large-scale simulations of plastic neural networks on neuromorphic hardware. Frontiers in Neuroanatomy, 10(37), [37]. https://doi.org/10.3389/fnana.2016.00037

Vancouver

Knight JC, Tully PJ, Kaplan BA, Lansner A, Furber S. Large-scale simulations of plastic neural networks on neuromorphic hardware. Frontiers in Neuroanatomy. 2016 Apr 7;10(37). 37. https://doi.org/10.3389/fnana.2016.00037

Author

Knight, James Courtney ; Tully, Philip Joseph ; Kaplan, Bernhard A ; Lansner, Anders ; Furber, Stephen. / Large-scale simulations of plastic neural networks on neuromorphic hardware. In: Frontiers in Neuroanatomy. 2016 ; Vol. 10, No. 37.

Bibtex

@article{fed200edcd0f43ae8a1be76f6dff929f,
title = "Large-scale simulations of plastic neural networks on neuromorphic hardware",
abstract = "SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10⁴ neurons and 5.1 × 10⁷ plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer system uses approximately 45× more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.",
keywords = "SpiNNaker, learning, plasticity, digital neuromorphic hardware, Bayesian confidence propagation neural network (BCPNN), event-driven simulation, fixed-point accuracy",
author = "Knight, {James Courtney} and Tully, {Philip Joseph} and Kaplan, {Bernhard A} and Anders Lansner and Stephen Furber",
note = "The design and construction of SpiNNaker was funded by EPSRC (the UK Engineering and Physical Sciences Research Council) under grants EP/G015740/1 and EP/G015775/1. The research was supported by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement 320689 and also by the EU Flagship Human Brain Project (FP7-604102) and the EU BrainScales project (EU-FP7-FET-269921). JK is supported by a Kilburn Studentship from the School of Computer Science at The University of Manchester.",
year = "2016",
month = apr,
day = "7",
doi = "10.3389/fnana.2016.00037",
language = "English",
volume = "10",
journal = "Frontiers in Neuroanatomy",
issn = "1662-5129",
publisher = "Frontiers Media S.A.",
number = "37",

}

RIS

TY - JOUR

T1 - Large-scale simulations of plastic neural networks on neuromorphic hardware

AU - Knight, James Courtney

AU - Tully, Philip Joseph

AU - Kaplan, Bernhard A

AU - Lansner, Anders

AU - Furber, Stephen

N1 - The design and construction of SpiNNaker was funded by EPSRC (the UK Engineering and Physical Sciences Research Council) under grants EP/G015740/1 and EP/G015775/1. The research was supported by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement 320689 and also by the EU Flagship Human Brain Project (FP7-604102) and the EU BrainScales project (EU-FP7-FET-269921). JK is supported by a Kilburn Studentship from the School of Computer Science at The University of Manchester.

PY - 2016/4/7

Y1 - 2016/4/7

N2 - SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10⁴ neurons and 5.1 × 10⁷ plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer system uses approximately 45× more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.

AB - SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10⁴ neurons and 5.1 × 10⁷ plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer system uses approximately 45× more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.

KW - SpiNNaker

KW - learning

KW - plasticity

KW - digital neuromorphic hardware

KW - Bayesian confidence propagation neural network (BCPNN)

KW - event-driven simulation

KW - fixed-point accuracy

U2 - 10.3389/fnana.2016.00037

DO - 10.3389/fnana.2016.00037

M3 - Article

VL - 10

JO - Frontiers in Neuroanatomy

JF - Frontiers in Neuroanatomy

SN - 1662-5129

IS - 37

M1 - 37

ER -
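
Note on the abstract's event-driven approach: the abstract describes advancing per-synapse BCPNN state variables analytically on spike events rather than every simulation time-step. The sketch below is a minimal, illustrative toy of that idea using standard BCPNN-style traces (fast spike traces low-pass filtered into probability traces, weight as a log-odds ratio). All names, time constants, and the first-order filtering approximation are assumptions for illustration, not the paper's actual SpiNNaker implementation.

```python
import math

TAU_Z = 10.0    # ms, fast spike-trace time constant (assumed value)
TAU_P = 1000.0  # ms, slow probability-trace time constant (assumed value)
EPS = 1e-2      # floor keeping log arguments positive

class BCPNNSynapse:
    """Toy event-driven BCPNN-style synapse: traces are advanced in
    closed form over the gap since the last event instead of being
    stepped on every simulation tick."""

    def __init__(self):
        self.z_i = 0.0          # presynaptic spike trace
        self.z_j = 0.0          # postsynaptic spike trace
        self.p_i = EPS          # estimate of P(pre active)
        self.p_j = EPS          # estimate of P(post active)
        self.p_ij = EPS * EPS   # estimate of P(pre and post active)
        self.t = 0.0            # time of the last processed event (ms)

    def _advance(self, t_now):
        """Advance all traces analytically over the silent interval."""
        dt = t_now - self.t
        dz = math.exp(-dt / TAU_Z)        # closed-form z-trace decay
        dp = 1.0 - math.exp(-dt / TAU_P)  # first-order low-pass step,
        # treating the z traces as constant over dt (an approximation)
        self.p_i += (self.z_i - self.p_i) * dp
        self.p_j += (self.z_j - self.p_j) * dp
        self.p_ij += (self.z_i * self.z_j - self.p_ij) * dp
        self.z_i *= dz
        self.z_j *= dz
        self.t = t_now

    def on_pre_spike(self, t_now):
        self._advance(t_now)
        self.z_i += 1.0

    def on_post_spike(self, t_now):
        self._advance(t_now)
        self.z_j += 1.0

    def weight(self):
        # BCPNN weight: log of the joint activation probability over
        # the product of the marginals (positive when correlated).
        return math.log(max(self.p_ij, EPS * EPS) /
                        (max(self.p_i, EPS) * max(self.p_j, EPS)))
```

With correlated pre/post spiking the joint trace p_ij grows faster than the product of the marginals, driving the weight positive; with anti-correlated spiking it stays near its floor and the weight goes negative, which is the qualitative behavior the learning rule is meant to capture.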