Abstract
We demonstrate improved classification of bioelectric data for systems such as robotic prosthesis control by fusing data from low-cost electromyography (EMG) and electroencephalography (EEG) devices. Prosthetic limbs are typically controlled through EMG, and whilst there is a wealth of research into the use of EEG as part of a brain-computer interface (BCI), the cost of EEG equipment commonly prevents this approach from being adopted outside the lab. This study demonstrates, as a proof of concept, that accurate multimodal classification can be achieved by using low-cost EMG and EEG devices in tandem with statistical decision-level fusion. We present multiple fusion methods, including those based on Jensen-Shannon divergence, which had not previously been applied to this problem. We report accuracies of up to 99% when merging both signal modalities, improving on the best-case single-mode classification. We hence demonstrate the strengths of combining EMG and EEG in a multimodal classification system that could in future be leveraged as an alternative control mechanism for robotic prostheses.
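The abstract does not spell out the fusion formulas, so the following Python sketch illustrates only one common way Jensen-Shannon divergence can be used for decision-level fusion: each modality's class posterior is weighted by its JS divergence from the uniform distribution, a simple confidence proxy, before the posteriors are averaged. The function and variable names (`fuse_posteriors`, `p_emg`, `p_eeg`) are hypothetical and this is not the paper's exact method.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions (base 2, in [0, 1])."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def fuse_posteriors(p_emg, p_eeg):
    """Decision-level fusion of per-modality class posteriors.

    Each modality is weighted by the JS divergence between its posterior and
    the uniform distribution: a classifier whose output is close to uniform
    (i.e. uncertain) contributes less to the fused decision.
    """
    uniform = np.full(len(p_emg), 1.0 / len(p_emg))
    w_emg = js_divergence(p_emg, uniform)
    w_eeg = js_divergence(p_eeg, uniform)
    total = w_emg + w_eeg
    if total == 0.0:  # both classifiers maximally uncertain: plain average
        return 0.5 * (np.asarray(p_emg) + np.asarray(p_eeg))
    return (w_emg * np.asarray(p_emg) + w_eeg * np.asarray(p_eeg)) / total

# Example: EMG is confident about class 0, EEG weakly prefers class 1;
# the fused decision follows the more confident modality.
p_emg = [0.85, 0.10, 0.05]
p_eeg = [0.30, 0.40, 0.30]
fused = fuse_posteriors(p_emg, p_eeg)
print(fused, "-> predicted class", int(np.argmax(fused)))
```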
| Original language | English |
|---|---|
| Article number | 012056 |
| Journal | Journal of Physics: Conference Series |
| Volume | 1828 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 4 Mar 2021 |
Bibliographical note
Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

Funding: This work is partially supported by the EPSRC-UK InDex project (EU CHIST-ERA programme), with reference EP/S032355/1, and by the Royal Society (UK) through the project "Sim2Real", grant number RGS\R2\192498.