TY - GEN
T1 - Decoding covert speech for intuitive control of brain-computer interfaces based on single-trial EEG
T2 - International Conference on Rehabilitation Robotics 2019 (ICORR 2019)
AU - Tøttrup, Lea
AU - Leerskov, Kasper
AU - Hadsund, Johannes Thorling
AU - Kamavuako, Ernest Nlandu
AU - Kæseler, Rasmus Leck
AU - Jochumsen, Mads
PY - 2019/6
Y1 - 2019/6
AB - For individuals with severe motor deficiencies, controlling external devices such as robotic arms or wheelchairs can be challenging, as many devices require some degree of motor control to be operated, e.g. when controlled using a joystick. A brain-computer interface (BCI) relies only on signals from the brain and may be used as a controller instead of the muscles. Motor imagery (MI) has been used as a control signal for BCIs in many studies. However, MI may not be suitable for all control purposes, and some people cannot obtain BCI control with MI. The aim of this study was to investigate the feasibility of decoding covert speech from single-trial EEG and to compare and combine it with MI. In seven healthy subjects, EEG was recorded with twenty-five channels during six different actions: speaking three words (both covert and overt speech), two arm movements (both motor imagery and execution), and one idle class. Temporal and spectral features were derived from the epochs and classified with a random forest classifier. The average classification accuracy was 67 ± 9% and 75 ± 7% for covert and overt speech, respectively; this was 5-10% lower than the accuracy of the movement classification. The performance of the combined movement-speech decoder was 61 ± 9% and 67 ± 7% (covert and overt, respectively), but the combined decoder makes more classes available for control. The possibility of using covert speech for controlling a BCI was outlined; this is a step towards a multimodal BCI system with improved usability.
UR - http://www.scopus.com/inward/record.url?scp=85071173748&partnerID=8YFLogxK
U2 - 10.1109/ICORR.2019.8779499
DO - 10.1109/ICORR.2019.8779499
M3 - Article in proceedings
T3 - IEEE International Conference on Rehabilitation Robotics. Proceedings
SP - 689
EP - 693
BT - 2019 IEEE 16th International Conference on Rehabilitation Robotics (ICORR)
PB - IEEE
Y2 - 24 June 2019 through 28 June 2019
ER -