Visual attention and object naming in humanoid robots using a bio-inspired spiking neural network

Citation formats

External authors:
  • Daniel Hernández García
  • Samantha Adams
  • Alex Rast
  • Thomas Wennekers

Standard

Visual attention and object naming in humanoid robots using a bio-inspired spiking neural network. / Hernández García, Daniel; Adams, Samantha; Rast, Alex; Wennekers, Thomas; Furber, Steve; Cangelosi, Angelo.

In: Robotics and Autonomous Systems, Vol. 104, 01.06.2018, p. 56-71.

Research output: Contribution to journal › Article › peer-review

Harvard

Hernández García, D, Adams, S, Rast, A, Wennekers, T, Furber, S & Cangelosi, A 2018, 'Visual attention and object naming in humanoid robots using a bio-inspired spiking neural network', Robotics and Autonomous Systems, vol. 104, pp. 56-71. https://doi.org/10.1016/j.robot.2018.02.010

APA

Hernández García, D., Adams, S., Rast, A., Wennekers, T., Furber, S., & Cangelosi, A. (2018). Visual attention and object naming in humanoid robots using a bio-inspired spiking neural network. Robotics and Autonomous Systems, 104, 56-71. https://doi.org/10.1016/j.robot.2018.02.010

Vancouver

Hernández García D, Adams S, Rast A, Wennekers T, Furber S, Cangelosi A. Visual attention and object naming in humanoid robots using a bio-inspired spiking neural network. Robotics and Autonomous Systems. 2018 Jun 1;104:56-71. https://doi.org/10.1016/j.robot.2018.02.010

Author

Hernández García, Daniel ; Adams, Samantha ; Rast, Alex ; Wennekers, Thomas ; Furber, Steve ; Cangelosi, Angelo. / Visual attention and object naming in humanoid robots using a bio-inspired spiking neural network. In: Robotics and Autonomous Systems. 2018 ; Vol. 104. pp. 56-71.

Bibtex

@article{fd04bc20c3304347bb6ef6809b8a5c73,
title = "Visual attention and object naming in humanoid robots using a bio-inspired spiking neural network",
abstract = "Recent advances in behavioural and computational neuroscience, cognitive robotics, and in the hardware implementation of large-scale neural networks, provide the opportunity for an accelerated understanding of brain functions and for the design of interactive robotic systems based on brain-inspired control systems. This is especially the case in the domain of action and language learning, given the significant scientific and technological developments in this field. In this work we describe how a neuroanatomically grounded spiking neural network for visual attention has been extended with a word learning capability and integrated with the iCub humanoid robot to demonstrate attention-led object naming. Experiments were carried out with both a simulated and a real iCub robot platform with successful results. The iCub robot is capable of associating a label to an object with a {\textquoteleft}preferred{\textquoteright} orientation when visual and word stimuli are presented concurrently in the scene, as well as attending to said object, thus naming it. After learning is complete, the name of the object can be recalled successfully when only the visual input is present, even when the object has been moved from its original position or when other objects are present as distractors.",
keywords = "Spiking neural networks",
author = "{Hern{\'a}ndez Garc{\'i}a}, Daniel and Samantha Adams and Alex Rast and Thomas Wennekers and Steve Furber and Angelo Cangelosi",
year = "2018",
month = jun,
day = "1",
doi = "10.1016/j.robot.2018.02.010",
language = "English",
volume = "104",
pages = "56--71",
journal = "Robotics and Autonomous Systems",
issn = "0921-8890",
publisher = "Elsevier BV",

}

RIS

TY - JOUR

T1 - Visual attention and object naming in humanoid robots using a bio-inspired spiking neural network

AU - Hernández García, Daniel

AU - Adams, Samantha

AU - Rast, Alex

AU - Wennekers, Thomas

AU - Furber, Steve

AU - Cangelosi, Angelo

PY - 2018/6/1

Y1 - 2018/6/1

N2 - Recent advances in behavioural and computational neuroscience, cognitive robotics, and in the hardware implementation of large-scale neural networks, provide the opportunity for an accelerated understanding of brain functions and for the design of interactive robotic systems based on brain-inspired control systems. This is especially the case in the domain of action and language learning, given the significant scientific and technological developments in this field. In this work we describe how a neuroanatomically grounded spiking neural network for visual attention has been extended with a word learning capability and integrated with the iCub humanoid robot to demonstrate attention-led object naming. Experiments were carried out with both a simulated and a real iCub robot platform with successful results. The iCub robot is capable of associating a label to an object with a ‘preferred’ orientation when visual and word stimuli are presented concurrently in the scene, as well as attending to said object, thus naming it. After learning is complete, the name of the object can be recalled successfully when only the visual input is present, even when the object has been moved from its original position or when other objects are present as distractors.

AB - Recent advances in behavioural and computational neuroscience, cognitive robotics, and in the hardware implementation of large-scale neural networks, provide the opportunity for an accelerated understanding of brain functions and for the design of interactive robotic systems based on brain-inspired control systems. This is especially the case in the domain of action and language learning, given the significant scientific and technological developments in this field. In this work we describe how a neuroanatomically grounded spiking neural network for visual attention has been extended with a word learning capability and integrated with the iCub humanoid robot to demonstrate attention-led object naming. Experiments were carried out with both a simulated and a real iCub robot platform with successful results. The iCub robot is capable of associating a label to an object with a ‘preferred’ orientation when visual and word stimuli are presented concurrently in the scene, as well as attending to said object, thus naming it. After learning is complete, the name of the object can be recalled successfully when only the visual input is present, even when the object has been moved from its original position or when other objects are present as distractors.

KW - Spiking neural networks

U2 - 10.1016/j.robot.2018.02.010

DO - 10.1016/j.robot.2018.02.010

M3 - Article

VL - 104

SP - 56

EP - 71

JO - Robotics and Autonomous Systems

JF - Robotics and Autonomous Systems

SN - 0921-8890

ER -