TY - JOUR
T1 - Internet of Things meets brain-computer interface
T2 - a unified deep learning framework for enabling human-thing cognitive interactivity
AU - Zhang, Xiang
AU - Yao, Lina
AU - Zhang, Shuai
AU - Kanhere, Salil
AU - Sheng, Michael
AU - Liu, Yunhao
PY - 2019/4/1
Y1 - 2019/4/1
N2 - A brain-computer interface (BCI) acquires brain signals, analyzes them, and translates them into commands that are relayed to actuation devices to carry out the desired actions. With the widespread connectivity of everyday devices realized by the advent of the Internet of Things (IoT), a BCI can empower individuals to control objects such as smart home appliances or assistive robots directly via their thoughts. However, realizing this vision faces a number of challenges, the most important being the difficulty of accurately interpreting an individual's intent from raw brain signals, which are often of low fidelity and subject to noise. Moreover, preprocessing brain signals and the subsequent feature engineering are both time-consuming and highly reliant on human domain expertise. To address these issues, in this paper we propose a unified deep learning-based framework that enables effective human-thing cognitive interactivity, bridging individuals and IoT objects. We design a reinforcement learning-based selective attention mechanism (SAM) to discover the distinctive features in the input brain signals. In addition, we propose a modified long short-term memory network to distinguish the interdimensional information forwarded from the SAM. To evaluate the efficiency of the proposed framework, we conduct extensive real-world experiments and demonstrate that our model outperforms a number of competitive state-of-the-art baselines. Two practical real-time human-thing cognitive interaction applications are presented to validate the feasibility of our approach.
KW - Brain-computer interface (BCI)
KW - Internet of Things (IoT)
KW - cognitive
KW - deep learning (DL)
UR - http://www.scopus.com/inward/record.url?scp=85055683076&partnerID=8YFLogxK
U2 - 10.1109/JIOT.2018.2877786
DO - 10.1109/JIOT.2018.2877786
M3 - Article
AN - SCOPUS:85055683076
SN - 2327-4662
VL - 6
SP - 2084
EP - 2092
JO - IEEE Internet of Things Journal
JF - IEEE Internet of Things Journal
IS - 2
M1 - 8506382
ER -