Abstract
Decoding language from non-invasive brain signals is crucial for building widely applicable brain-computer interfaces (BCIs). However, most existing studies have focused on discriminating which of two stimuli corresponds to a given brain image, which is far from directly generating text from neural activities. To move towards this goal, we first propose two neural decoding tasks of incremental difficulty. The first and simpler task is to predict a word given a brain image and a context, a first step towards text generation. The second, more difficult task is to directly generate text from a given brain image and a prefix. To address both tasks, we propose a general approach that leverages a powerful pre-trained encoder-decoder model to predict a word or generate a text fragment. Our model achieves 18.20% and 7.95% top-1 accuracy on the two tasks respectively, over a vocabulary of more than 2,000 words and averaged across all participants, significantly outperforming strong baselines. These results demonstrate the feasibility of directly generating text from neural activities in a non-invasive way. We hope our work moves practical non-invasive neural language decoders a step further.
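To make the first task concrete, the sketch below illustrates the word-prediction setup and its top-1 accuracy metric on toy data. This is not the paper's model: the dimensions, the linear projection standing in for the pre-trained encoder-decoder, and all names are hypothetical assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB_SIZE = 2000   # vocabulary of ~2,000 words, as in the abstract
BRAIN_DIM = 128     # hypothetical dimensionality of a brain-image feature vector
CTX_DIM = 64        # hypothetical dimensionality of a context embedding

# Hypothetical linear decoder: concatenate brain features with context,
# then project to vocabulary logits. A stand-in for the pre-trained
# encoder-decoder model described in the abstract.
W = rng.normal(size=(BRAIN_DIM + CTX_DIM, VOCAB_SIZE))

def predict_word(brain_vec, ctx_vec):
    """Return the index of the highest-scoring word (top-1 prediction)."""
    logits = np.concatenate([brain_vec, ctx_vec]) @ W
    return int(np.argmax(logits))

def top1_accuracy(brain_batch, ctx_batch, gold_ids):
    """Fraction of examples whose top-1 prediction matches the gold word."""
    preds = np.array([predict_word(b, c) for b, c in zip(brain_batch, ctx_batch)])
    return float(np.mean(preds == gold_ids))

# Toy evaluation on random data: an untrained decoder over a 2,000-word
# vocabulary should score near chance (1/2000 = 0.05%), far below the
# 18.20% the paper reports for its trained model.
brain = rng.normal(size=(100, BRAIN_DIM))
ctx = rng.normal(size=(100, CTX_DIM))
gold = rng.integers(0, VOCAB_SIZE, size=100)
acc = top1_accuracy(brain, ctx, gold)
```

The second task (prefix-conditioned text generation) would replace the single `argmax` with autoregressive decoding, but the evaluation idea is the same.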
| Original language | English |
|---|---|
| Publication status | Published - Dec 2021 |
| Externally published | Yes |
| Event | NeurIPS 2021 AI for Science Workshop: Mind the Gaps |
| Event duration | 6 Dec 2021 → 14 Dec 2021 |
| Internet address | https://ai4sciencecommunity.github.io/neurips21.html |