Exploring Deep Models for Comprehension of Deictic Gesture-Word Combinations in Cognitive Robotics

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In the early stages of infant development, gestures and speech are integrated during language acquisition. Such a natural combination is therefore a desirable, yet challenging, goal for fluid human-robot interaction. To achieve this, we propose a multimodal deep learning architecture for comprehension of complementary gesture-word combinations, implemented on an iCub humanoid robot. This enables human-assisted language learning through interactions such as pointing at a cup and labelling it with a vocal utterance. We evaluate various depths of the Mask Regional Convolutional Neural Network (for object and wrist detection) and the Residual Network (for gesture classification). Validation is carried out with two deictic gestures across ten real-world objects, on frames recorded directly from the iCub’s cameras. The results further strengthen the potential of gesture-word combinations for robot language acquisition.
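The abstract names the two building blocks (Mask R-CNN for object and wrist detection, a ResNet for gesture classification) without implementation detail. The sketch below is a minimal, hypothetical illustration of how such components could be combined to ground a pointing gesture in a camera frame, using off-the-shelf torchvision models rather than the authors' trained networks; the confidence threshold, the two-class gesture head, and the nearest-object heuristic are assumptions for illustration only.

```python
# Illustrative sketch (not the paper's implementation): combine a Mask R-CNN
# detector with a ResNet gesture classifier to resolve which detected object
# a pointing gesture refers to.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Object/wrist detector: torchvision's Mask R-CNN with a ResNet-50 FPN backbone.
detector = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

# Gesture classifier: a ResNet-18 with its final layer replaced for two deictic
# gesture classes (hypothetical; would need fine-tuning on wrist crops).
gesture_net = torchvision.models.resnet18()
gesture_net.fc = torch.nn.Linear(gesture_net.fc.in_features, 2)
gesture_net.eval()

@torch.no_grad()
def ground_gesture(frame, wrist_box):
    """Classify the deictic gesture and return the object box nearest the wrist."""
    img = to_tensor(frame)
    # 1) Gesture type from a crop around the detected wrist region.
    x0, y0, x1, y1 = [int(v) for v in wrist_box]
    crop = img[:, y0:y1, x0:x1].unsqueeze(0)
    crop = torch.nn.functional.interpolate(crop, size=(224, 224))
    gesture = gesture_net(crop).argmax(dim=1).item()
    # 2) Candidate objects from Mask R-CNN detections on the full frame.
    det = detector([img])[0]
    boxes = det["boxes"][det["scores"] > 0.7]  # confidence threshold (assumed)
    if boxes.numel() == 0:
        return gesture, None
    # 3) Simple grounding heuristic: pick the object whose centre lies closest
    #    to the wrist centre (a stand-in for a full pointing-direction model).
    wrist_centre = torch.tensor([(x0 + x1) / 2.0, (y0 + y1) / 2.0])
    centres = (boxes[:, :2] + boxes[:, 2:]) / 2
    return gesture, boxes[(centres - wrist_centre).norm(dim=1).argmin()]
```

In a setup like the one described, the returned object box could then be paired with the heard word label (e.g. "cup") to form a gesture-word training example for the robot.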

Bibliographical metadata

Original language: English
Title of host publication: International Joint Conference on Neural Networks
Publication status: Accepted/In press - 7 Mar 2019