Decoding covert speech for intuitive control of brain-computer interfaces based on single-trial EEG: a feasibility study

Lea Tøttrup, Johannes Thorling Hadsund, Ernest Nlandu Kamavuako, Rasmus Leck Kæseler, Mads Jochumsen

Research output: Contribution to book/anthology/report/conference proceeding › Article in proceeding › Research › peer-review

Abstract

For individuals with severe motor deficiencies, controlling external devices such as robotic arms or wheelchairs can be challenging, as many devices require some degree of motor control to be operated, e.g. when controlled using a joystick. A brain-computer interface (BCI) relies only on signals from the brain and may be used as a controller instead of muscles. Motor imagery (MI) has been used in many studies as a control signal for BCIs. However, MI may not be suitable for all control purposes, and some people cannot attain BCI control with MI. In this study, the aim was to investigate the feasibility of decoding covert speech from single-trial EEG and to compare and combine it with MI. In seven healthy subjects, EEG was recorded with twenty-five channels during six different actions: speaking three words (both covert and overt speech), two arm movements (both motor imagery and execution), and one idle class. Temporal and spectral features were derived from the epochs and classified with a random forest classifier. The average classification accuracy was 67 ± 9% and 75 ± 7% for covert and overt speech, respectively; this was 5-10% lower than the corresponding movement classification accuracies. The performance of the combined movement-speech decoder was 61 ± 9% and 67 ± 7% (covert and overt, respectively), but the combination makes more classes available for control. The possibility of using covert speech for controlling a BCI was outlined; this is a step towards a multimodal BCI system with improved usability.
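The abstract names the pipeline (temporal and spectral features per epoch, random forest classification) but not the exact feature set. A minimal sketch of such a pipeline, assuming common choices that are not specified in the paper (per-channel time-domain statistics plus Welch band power, scikit-learn's RandomForestClassifier, a hypothetical 500 Hz sampling rate, and simulated data in place of the recorded EEG):

import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

FS = 500  # sampling rate in Hz (assumed; not stated in the abstract)
BANDS = [(4, 8), (8, 13), (13, 30), (30, 45)]  # theta/alpha/beta/gamma (assumed)

def extract_features(epoch):
    """Temporal and spectral features for one epoch (channels x samples).

    Illustrative feature choices only; the paper's exact feature set
    is not given in the abstract.
    """
    feats = []
    # Temporal features: per-channel mean, variance, and waveform length.
    feats.append(epoch.mean(axis=1))
    feats.append(epoch.var(axis=1))
    feats.append(np.abs(np.diff(epoch, axis=1)).sum(axis=1))
    # Spectral features: mean band power from Welch's PSD estimate.
    freqs, psd = welch(epoch, fs=FS, nperseg=FS)
    for lo, hi in BANDS:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=1))
    return np.concatenate(feats)

# Simulated stand-in for one subject's recording:
# n_epochs x 25 channels x samples, with six class labels as in the study.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((120, 25, 2 * FS))
labels = rng.integers(0, 6, size=120)

X = np.array([extract_features(ep) for ep in epochs])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, labels, cv=5).mean())

On random labels this hovers near chance (about 1/6); the paper's reported accuracies refer to its own features and recorded EEG, not to this sketch.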
Original language: English
Title of host publication: International Conference on Rehabilitation Robotics (ICORR)
Number of pages: 5
Volume: 16
Publisher: IEEE
Publication date: 2019
Publication status: Accepted/In press - 2019
Event: International Conference on Rehabilitation Robotics 2019 (ICORR 2019) - Toronto, Canada
Duration: 24 Jun 2019 - 28 Jun 2019

Conference

Conference: International Conference on Rehabilitation Robotics 2019 (ICORR 2019)
Country: Canada
City: Toronto
Period: 24/06/2019 - 28/06/2019


Cite this

Tøttrup, L., Hadsund, J. T., Kamavuako, E. N., Kæseler, R. L., & Jochumsen, M. (Accepted/In press). Decoding covert speech for intuitive control of brain-computer interfaces based on single-trial EEG: a feasibility study. In International Conference on Rehabilitation Robotics (ICORR) (Vol. 16). IEEE.