Scalable communications for a million-core neural processing architecture

External authors:

  • Cameron Patterson
  • Eustace Painkras
  • Steve Temple
  • Javier Navaridas
  • Thomas Sharp
Citation formats

Standard

Scalable communications for a million-core neural processing architecture. / Patterson, Cameron; Garside, Jim; Painkras, Eustace; Temple, Steve; Plana, Luis A.; Navaridas, Javier; Sharp, Thomas; Furber, Steve.

In: Journal of Parallel and Distributed Computing, Vol. 72, No. 11, 11.2012, p. 1507-1520.

Research output: Contribution to journal › Article › peer-review

Harvard

Patterson, C, Garside, J, Painkras, E, Temple, S, Plana, LA, Navaridas, J, Sharp, T & Furber, S 2012, 'Scalable communications for a million-core neural processing architecture', Journal of Parallel and Distributed Computing, vol. 72, no. 11, pp. 1507-1520. https://doi.org/10.1016/j.jpdc.2012.01.016

APA

Patterson, C., Garside, J., Painkras, E., Temple, S., Plana, L. A., Navaridas, J., Sharp, T., & Furber, S. (2012). Scalable communications for a million-core neural processing architecture. Journal of Parallel and Distributed Computing, 72(11), 1507-1520. https://doi.org/10.1016/j.jpdc.2012.01.016

Vancouver

Patterson C, Garside J, Painkras E, Temple S, Plana LA, Navaridas J et al. Scalable communications for a million-core neural processing architecture. Journal of Parallel and Distributed Computing. 2012 Nov;72(11):1507-1520. https://doi.org/10.1016/j.jpdc.2012.01.016

Author

Patterson, Cameron; Garside, Jim; Painkras, Eustace; Temple, Steve; Plana, Luis A.; Navaridas, Javier; Sharp, Thomas; Furber, Steve. / Scalable communications for a million-core neural processing architecture. In: Journal of Parallel and Distributed Computing. 2012; Vol. 72, No. 11. pp. 1507-1520.

Bibtex

@article{eee67340b8d9480aa299cbed690bcf2c,
  title     = "Scalable communications for a million-core neural processing architecture",
  abstract  = "The design of a new high-performance computing platform to model biological neural networks requires scalable, layered communications in both hardware and software. SpiNNaker's hardware is based upon Multi-Processor System-on-Chips (MPSoCs) with flexible, power-efficient, custom communication between processors and chips. The architecture scales from a single 18-processor chip to over 1 million processors and to simulations of billion-neuron, trillion-synapse models, with tens of trillions of neural spike-event packets conveyed each second. The communication networks and overlying protocols are key to the successful operation of the SpiNNaker architecture, designed together to maximise performance and minimise the power demands of the platform. SpiNNaker is a work in progress, having recently reached a major milestone with the delivery of the first MPSoCs. This paper presents the architectural justification, which is now supported by preliminary measured results of silicon performance, indicating that it is indeed scalable to a million-plus processor system. {\textcopyright} 2012 Elsevier Inc. All rights reserved.",
  keywords  = "GALS, HPC, Low-power, Network-on-Chip, Neuromorphic, Parallel architecture",
  author    = "Cameron Patterson and Jim Garside and Eustace Painkras and Steve Temple and Plana, {Luis A.} and Javier Navaridas and Thomas Sharp and Steve Furber",
  year      = "2012",
  month     = nov,
  doi       = "10.1016/j.jpdc.2012.01.016",
  language  = "English",
  volume    = "72",
  pages     = "1507--1520",
  journal   = "Journal of Parallel and Distributed Computing",
  issn      = "0743-7315",
  publisher = "Elsevier BV",
  number    = "11",
}

RIS

TY  - JOUR
T1  - Scalable communications for a million-core neural processing architecture
AU  - Patterson, Cameron
AU  - Garside, Jim
AU  - Painkras, Eustace
AU  - Temple, Steve
AU  - Plana, Luis A.
AU  - Navaridas, Javier
AU  - Sharp, Thomas
AU  - Furber, Steve
PY  - 2012/11
Y1  - 2012/11
N2  - The design of a new high-performance computing platform to model biological neural networks requires scalable, layered communications in both hardware and software. SpiNNaker's hardware is based upon Multi-Processor System-on-Chips (MPSoCs) with flexible, power-efficient, custom communication between processors and chips. The architecture scales from a single 18-processor chip to over 1 million processors and to simulations of billion-neuron, trillion-synapse models, with tens of trillions of neural spike-event packets conveyed each second. The communication networks and overlying protocols are key to the successful operation of the SpiNNaker architecture, designed together to maximise performance and minimise the power demands of the platform. SpiNNaker is a work in progress, having recently reached a major milestone with the delivery of the first MPSoCs. This paper presents the architectural justification, which is now supported by preliminary measured results of silicon performance, indicating that it is indeed scalable to a million-plus processor system. © 2012 Elsevier Inc. All rights reserved.
AB  - The design of a new high-performance computing platform to model biological neural networks requires scalable, layered communications in both hardware and software. SpiNNaker's hardware is based upon Multi-Processor System-on-Chips (MPSoCs) with flexible, power-efficient, custom communication between processors and chips. The architecture scales from a single 18-processor chip to over 1 million processors and to simulations of billion-neuron, trillion-synapse models, with tens of trillions of neural spike-event packets conveyed each second. The communication networks and overlying protocols are key to the successful operation of the SpiNNaker architecture, designed together to maximise performance and minimise the power demands of the platform. SpiNNaker is a work in progress, having recently reached a major milestone with the delivery of the first MPSoCs. This paper presents the architectural justification, which is now supported by preliminary measured results of silicon performance, indicating that it is indeed scalable to a million-plus processor system. © 2012 Elsevier Inc. All rights reserved.
KW  - GALS
KW  - HPC
KW  - Low-power
KW  - Network-on-Chip
KW  - Neuromorphic
KW  - Parallel architecture
U2  - 10.1016/j.jpdc.2012.01.016
DO  - 10.1016/j.jpdc.2012.01.016
M3  - Article
VL  - 72
SP  - 1507
EP  - 1520
JO  - Journal of Parallel and Distributed Computing
JF  - Journal of Parallel and Distributed Computing
SN  - 0743-7315
IS  - 11
ER  -