Fusion from Multimodal Gait Spatiotemporal Data for Human Gait Speed Classifications

Research output: Contribution to conference › Paper › peer-review


Human gait patterns remain largely undefined when relying on a single sensing modality. We report a pilot implementation of sensor fusion to classify gait spatiotemporal signals from a publicly available dataset of 50 participants, harvested from four different types of sensors. For fusion we propose a hybrid Convolutional Neural Network and Long Short-Term Memory (hybrid CNN+LSTM) network and a Multi-stream CNN. The classification results are compared against single-modality methods: a Single-stream CNN, a state-of-the-art Vision Transformer, and statistical classification algorithms. The fusion models outperformed the single-modality methods and classified the gait speed of 10 previously unseen, randomly selected subjects across the four gait speed classes with a 97% F1-score.
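The paper does not publish its architecture details here, but the hybrid CNN+LSTM idea it names is a standard pattern: a 1-D CNN extracts per-timestep features from spatiotemporal sensor windows, an LSTM models their temporal dynamics, and a linear head predicts the gait speed class. The sketch below is a hypothetical minimal PyTorch version, assuming an input shape of (batch, sensor channels, time) and four speed classes; all layer sizes and the six-channel input are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class HybridCNNLSTM(nn.Module):
    """Hypothetical sketch of a hybrid CNN+LSTM gait-speed classifier.

    Assumptions (not specified in the abstract): input windows are shaped
    (batch, channels, time); the CNN extracts per-timestep features, the
    LSTM summarises their temporal dynamics, and a linear head scores
    the four gait speed classes.
    """

    def __init__(self, in_channels=6, num_classes=4, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):
        feats = self.cnn(x)             # (batch, 64, time // 2)
        feats = feats.transpose(1, 2)   # (batch, time // 2, 64) for the LSTM
        _, (h_n, _) = self.lstm(feats)  # final hidden state summarises the window
        return self.head(h_n[-1])       # (batch, num_classes) logits

model = HybridCNNLSTM()
logits = model(torch.randn(8, 6, 128))  # 8 windows, 6 channels, 128 samples
print(logits.shape)                     # torch.Size([8, 4])
```

In a multi-sensor fusion setting, one such branch per modality could be concatenated before the classification head, which is the essence of the Multi-stream CNN the abstract also mentions.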

Bibliographical metadata

Original language: English
Number of pages: 4
Publication status: Published - 31 Oct 2021
Event: IEEE SENSORS 2021 - Virtual conference, Sydney, Australia
Event duration: 31 Oct 2021 – 4 Nov 2021


Conference: IEEE SENSORS 2021