Real-time event-driven spiking neural network object recognition on the SpiNNaker platform
Citation formats
Standard
Real-time event-driven spiking neural network object recognition on the SpiNNaker platform. / Orchard, Garrick; Lagorce, Xavier; Posch, Christoph; Furber, Stephen; Benosman, Ryad; Galluppi, Francesco.
Proceedings - IEEE International Symposium on Circuits and Systems. Vol. 2015-July. IEEE, 2015. p. 2413-2416, 7169171.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review
RIS
TY - GEN
T1 - Real-time event-driven spiking neural network object recognition on the SpiNNaker platform
AU - Orchard, Garrick
AU - Lagorce, Xavier
AU - Posch, Christoph
AU - Furber, Stephen
AU - Benosman, Ryad
AU - Galluppi, Francesco
PY - 2015/7/27
Y1 - 2015/7/27
N2 - This paper presents a real-time spiking neural network adaptation of the HMAX object recognition model on an event-driven platform. Visual input is provided by a spiking silicon retina, while the SpiNNaker system is used as the computational hardware platform for the implementation. We show the implementation of a simple Leaky Integrate-and-Fire (LIF) neuron model on SpiNNaker to create an event-driven network, in which a neuron only updates when it receives an interrupt indicating that a new input spike has been received. The model output consists of view-tuned neurons which respond selectively to a particular view of an object. The network can be used to discriminate between objects, or between different views of the same object. On a 26-class character recognition task, the correct class is always assigned the highest probability (69.42% on average).
AB - This paper presents a real-time spiking neural network adaptation of the HMAX object recognition model on an event-driven platform. Visual input is provided by a spiking silicon retina, while the SpiNNaker system is used as the computational hardware platform for the implementation. We show the implementation of a simple Leaky Integrate-and-Fire (LIF) neuron model on SpiNNaker to create an event-driven network, in which a neuron only updates when it receives an interrupt indicating that a new input spike has been received. The model output consists of view-tuned neurons which respond selectively to a particular view of an object. The network can be used to discriminate between objects, or between different views of the same object. On a 26-class character recognition task, the correct class is always assigned the highest probability (69.42% on average).
UR - http://www.scopus.com/inward/record.url?scp=84946220655&partnerID=8YFLogxK
U2 - 10.1109/ISCAS.2015.7169171
DO - 10.1109/ISCAS.2015.7169171
M3 - Conference contribution
AN - SCOPUS:84946220655
SN - 9781479983919
VL - 2015-July
SP - 2413
EP - 2416
BT - Proceedings - IEEE International Symposium on Circuits and Systems
PB - IEEE
T2 - IEEE International Symposium on Circuits and Systems, ISCAS 2015
Y2 - 24 May 2015 through 27 May 2015
ER -
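
The abstract above describes the core implementation detail: rather than stepping every neuron on a fixed clock, a SpiNNaker core updates a Leaky Integrate-and-Fire neuron only when an interrupt signals that a new input spike has arrived. The sketch below illustrates that lazy, event-driven LIF update in plain Python; it is an assumption-based illustration of the technique, not the authors' SpiNNaker code, and the parameter names and values (tau_m, v_thresh, v_reset, the example spike train) are hypothetical.

import math

class EventDrivenLIF:
    """Minimal leaky integrate-and-fire neuron updated only on input events.

    The membrane decay is applied lazily over the time elapsed since the last
    event, mirroring the interrupt-driven update described in the abstract.
    """

    def __init__(self, tau_m=20.0, v_thresh=1.0, v_reset=0.0):
        self.tau_m = tau_m        # membrane time constant (ms) - illustrative value
        self.v_thresh = v_thresh  # firing threshold - illustrative value
        self.v_reset = v_reset    # reset potential after an output spike
        self.v = v_reset          # current membrane potential
        self.last_t = 0.0         # time of the last processed event (ms)

    def receive_spike(self, t, weight):
        """Handle one input spike at time t; return True if the neuron fires."""
        # Decay the membrane potential over the interval since the last event.
        self.v *= math.exp(-(t - self.last_t) / self.tau_m)
        self.last_t = t
        # Integrate the incoming spike through its synaptic weight.
        self.v += weight
        # Threshold check and reset.
        if self.v >= self.v_thresh:
            self.v = self.v_reset
            return True
        return False

# Example: the neuron fires once the decayed, summed input crosses threshold.
neuron = EventDrivenLIF()
for t, w in [(1.0, 0.6), (3.0, 0.3), (4.0, 0.4)]:
    if neuron.receive_spike(t, w):
        print(f"output spike at t={t} ms")

The point being illustrated is that the exponential membrane decay is computed once per incoming event over the elapsed interval, so an idle neuron incurs no cost between spikes.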