A Gesture-based Multimodal Interface for Human-Robot Interaction

Mikael Uimonen*, Paul Kemppi, Taru Hakanen

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

Abstract

Surface electromyography (sEMG) has been proposed as a possible input modality for gesture- or proportional-control-based human-robot interaction, relieving the operator of hand-held controllers. In mobile robotics, however, its applications have been limited, often providing only direct control over the robot's velocity. In this work, we propose a multimodal interface for controlling mobile robots in collaborative settings. The robot navigates using a combination of its internal sensors, a LiDAR, and spatial input from a person-detecting neural network. The operator is identified in the robot's camera view as the person wearing an sEMG armband, although the armband's main use is detecting hand gestures for commanding the robot. While navigation depends on the line of sight between the robot and the operator, using the armband for gesture detection allows the robot to be commanded regardless of occlusions or changes in lighting conditions. Gesture-detection neural networks are first trained and tested offline with multiple subjects. The complete interface is then evaluated as a proof of concept, with an expert user performing a sequence of tasks in cooperation with a quadruped mobile robot (Boston Dynamics). We demonstrate the usability of the interface in a realistic environment and show strong long-term online performance of the gesture-detection model, with an average F-score of 0.94.
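The reported average F-score of 0.94 combines the classifier's precision and recall across the gesture classes. As a sketch only (the abstract does not specify the averaging scheme or the gesture vocabulary, so the macro averaging and the labels below are assumptions), a macro-averaged F-score over multi-class predictions can be computed as:

```python
def macro_f1(y_true, y_pred):
    """Mean per-class F1 over the classes present in y_true."""
    scores = []
    for c in sorted(set(y_true)):
        # Per-class true positives, false positives, false negatives.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        scores.append(f1)
    return sum(scores) / len(scores)

# Hypothetical gesture labels, purely for illustration.
y_true = ["stop", "follow", "rest", "stop", "follow", "rest", "stop", "rest"]
y_pred = ["stop", "follow", "rest", "stop", "rest",   "rest", "stop", "rest"]
print(round(macro_f1(y_true, y_pred), 3))  # → 0.841
```

Macro averaging weights every gesture class equally, which is a common choice when class frequencies differ; the paper may instead report a per-window or micro average.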

Original language: English
Title of host publication: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Pages: 165-170
Number of pages: 6
ISBN (Electronic): 9798350336702
DOIs
Publication status: Published - 2023
MoE publication type: A4 Article in a conference publication
Event: 32nd IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2023 - Busan, Korea, Republic of
Duration: 28 Aug 2023 – 31 Aug 2023

Conference

Conference: 32nd IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2023
Country/Territory: Korea, Republic of
City: Busan
Period: 28/08/23 – 31/08/23
