Scalable Energy-Efficient, Low-Latency Implementations of Spiking Deep Belief Networks on SpiNNaker

External authors:
  • Evangelos Stromatias
  • Daniel Neil
  • Francesco Galluppi
  • Michael Pfeiffer
  • Shih-Chii Liu

Citation formats

Standard

Scalable Energy-Efficient, Low-Latency Implementations of Spiking Deep Belief Networks on SpiNNaker. / Stromatias, Evangelos; Neil, Daniel; Galluppi, Francesco; Pfeiffer, Michael; Liu, Shih-Chii; Furber, Steve.

2015 International Joint Conference on Neural Networks (IJCNN). IEEE, 2015.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Harvard

Stromatias, E, Neil, D, Galluppi, F, Pfeiffer, M, Liu, S-C & Furber, S 2015, Scalable Energy-Efficient, Low-Latency Implementations of Spiking Deep Belief Networks on SpiNNaker. in 2015 International Joint Conference on Neural Networks (IJCNN). IEEE, 2015 International Joint Conference on Neural Networks, Killarney Convention Centre in Killarney, Ireland, 12/07/15. https://doi.org/10.1109/IJCNN.2015.7280625

APA

Stromatias, E., Neil, D., Galluppi, F., Pfeiffer, M., Liu, S.-C., & Furber, S. (2015). Scalable Energy-Efficient, Low-Latency Implementations of Spiking Deep Belief Networks on SpiNNaker. In 2015 International Joint Conference on Neural Networks (IJCNN). IEEE. https://doi.org/10.1109/IJCNN.2015.7280625

Vancouver

Stromatias E, Neil D, Galluppi F, Pfeiffer M, Liu S-C, Furber S. Scalable Energy-Efficient, Low-Latency Implementations of Spiking Deep Belief Networks on SpiNNaker. In 2015 International Joint Conference on Neural Networks (IJCNN). IEEE. 2015. https://doi.org/10.1109/IJCNN.2015.7280625

Author

Stromatias, Evangelos ; Neil, Daniel ; Galluppi, Francesco ; Pfeiffer, Michael ; Liu, Shih-Chii ; Furber, Steve. / Scalable Energy-Efficient, Low-Latency Implementations of Spiking Deep Belief Networks on SpiNNaker. 2015 International Joint Conference on Neural Networks (IJCNN). IEEE, 2015.

Bibtex

@inproceedings{00677202ad824db388fd1a80f7c049e0,
title = "Scalable Energy-Efficient, Low-Latency Implementations of Spiking Deep Belief Networks on SpiNNaker",
abstract = "Deep neural networks have become the state-of-the-art approach for classification in machine learning, and Deep Belief Networks (DBNs) are one of its most successful representatives. DBNs consist of many neuron-like units, which are connected only to neurons in neighboring layers. Larger DBNs have been shown to perform better, but scaling-up poses problems for conventional CPUs, which calls for efficient implementations on parallel computing architectures, in particular reducing the communication overhead. In this context we introduce a realization of a spike-based variation of previously trained DBNs on the biologically-inspired parallel SpiNNaker platform. The DBN on SpiNNaker runs in real-time and achieves a classification performance of 95% on the MNIST handwritten digit dataset, which is only 0.06% less than that of a pure software implementation. Importantly, using a neurally-inspired architecture yields additional benefits: during network run-time on this task, the platform consumes only 0.3 W with classification latencies in the order of tens of milliseconds, making it suitable for implementing such networks on a mobile platform. The results in this paper also show how the power dissipation of the SpiNNaker platform and the classification latency of a network scales with the number of neurons and layers in the network and the overall spike activity rate.",
author = "Evangelos Stromatias and Daniel Neil and Francesco Galluppi and Michael Pfeiffer and Shih-Chii Liu and Steve Furber",
year = "2015",
doi = "10.1109/IJCNN.2015.7280625",
language = "English",
booktitle = "2015 International Joint Conference on Neural Networks (IJCNN)",
publisher = "IEEE",
address = "United States",
note = "2015 International Joint Conference on Neural Networks ; Conference date: 12-07-2015 Through 16-07-2015",

}

RIS

TY - GEN

T1 - Scalable Energy-Efficient, Low-Latency Implementations of Spiking Deep Belief Networks on SpiNNaker

AU - Stromatias, Evangelos

AU - Neil, Daniel

AU - Galluppi, Francesco

AU - Pfeiffer, Michael

AU - Liu, Shih-Chii

AU - Furber, Steve

PY - 2015

Y1 - 2015

N2 - Deep neural networks have become the state-of-the-art approach for classification in machine learning, and Deep Belief Networks (DBNs) are one of its most successful representatives. DBNs consist of many neuron-like units, which are connected only to neurons in neighboring layers. Larger DBNs have been shown to perform better, but scaling-up poses problems for conventional CPUs, which calls for efficient implementations on parallel computing architectures, in particular reducing the communication overhead. In this context we introduce a realization of a spike-based variation of previously trained DBNs on the biologically-inspired parallel SpiNNaker platform. The DBN on SpiNNaker runs in real-time and achieves a classification performance of 95% on the MNIST handwritten digit dataset, which is only 0.06% less than that of a pure software implementation. Importantly, using a neurally-inspired architecture yields additional benefits: during network run-time on this task, the platform consumes only 0.3 W with classification latencies in the order of tens of milliseconds, making it suitable for implementing such networks on a mobile platform. The results in this paper also show how the power dissipation of the SpiNNaker platform and the classification latency of a network scales with the number of neurons and layers in the network and the overall spike activity rate.

AB - Deep neural networks have become the state-of-the-art approach for classification in machine learning, and Deep Belief Networks (DBNs) are one of its most successful representatives. DBNs consist of many neuron-like units, which are connected only to neurons in neighboring layers. Larger DBNs have been shown to perform better, but scaling-up poses problems for conventional CPUs, which calls for efficient implementations on parallel computing architectures, in particular reducing the communication overhead. In this context we introduce a realization of a spike-based variation of previously trained DBNs on the biologically-inspired parallel SpiNNaker platform. The DBN on SpiNNaker runs in real-time and achieves a classification performance of 95% on the MNIST handwritten digit dataset, which is only 0.06% less than that of a pure software implementation. Importantly, using a neurally-inspired architecture yields additional benefits: during network run-time on this task, the platform consumes only 0.3 W with classification latencies in the order of tens of milliseconds, making it suitable for implementing such networks on a mobile platform. The results in this paper also show how the power dissipation of the SpiNNaker platform and the classification latency of a network scales with the number of neurons and layers in the network and the overall spike activity rate.

U2 - 10.1109/IJCNN.2015.7280625

DO - 10.1109/IJCNN.2015.7280625

M3 - Conference contribution

BT - 2015 International Joint Conference on Neural Networks (IJCNN)

PB - IEEE

T2 - 2015 International Joint Conference on Neural Networks

Y2 - 12 July 2015 through 16 July 2015

ER -