Classification for Motion Game Based on EEG Sensing

The accuracy of EEG- and motion-signal-based classification systems is limited by sensor technology and classification algorithms. Moreover, a motion-sensor-based interface cannot reveal the user's mental activity, such as engagement (positive or negative) or tiredness, which is important in education and medical care. In this paper, a motion classification system based on an OpenBCI board and a Kinect sensor is proposed. The experimental results show that the proposed method outperforms traditional motion- or EEG-based activity classification systems, and it is expected to lead to a novel interactive device for children and elders based on the integrated algorithm.


Introduction
Brain-computer interface (BCI) technology enables users to control a device according to their neural activity. Motion features, an important complement to BCI, have been used in many motion games through body-surface sensors such as the Microsoft Kinect, Leap Motion, and Nintendo Wii. However, the accuracy of electroencephalography (EEG)- and motion-signal-based classification systems is limited by sensor technology and classification algorithms. Moreover, a motion-sensor-based interface cannot reveal the user's mental activity, such as engagement (positive or negative) or tiredness, which is important in education and medical care. In addition, children and elders cannot always perform the motions well [1]. To address this problem, BCI and motion technologies are integrated using the following two devices: (1) OpenBCI, an open-source framework for brain-computer interfaces; it has been reported that the OpenBCI board can indeed be an effective alternative to traditional EEG amplifiers [2]. (2) Microsoft Kinect, a motion-sensing input device for Xbox 360 and Windows PC video games; its infrared projector, camera, and a special microchip track the movement of objects and individuals in three dimensions and interpret specific gestures.
In this paper, a motion classification system based on an OpenBCI board and a Microsoft Kinect sensor is proposed. The motion and EEG features are used to train and test SVM classifiers.
With the Kinect, users need not be bothered with body sensors, so the system saves them from wearing sensors that can be intrusive [3,4]. The Kinect Application Programmer's Interface (API) was used to interface with the sensor. Its skeletal tracking software estimates the positions of 20 anatomical landmarks (30 Hz, 640×480 pixels). The tracked skeleton includes the following joints: head, neck, right shoulder, left shoulder, right elbow, left elbow, right hand, left hand, torso, right hip, left hip, right knee, left knee, right foot, and left foot. The Kinect determines the center position of specific joints using a fixed, rather simple human skeleton model, from which joint motion can be measured.
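As a rough sketch of how joint vectors can be derived from the tracked landmarks (the joint names, bone pairs, and coordinates below are illustrative and do not reproduce the paper's exact fourteen-vector set):

```python
import numpy as np

# Hypothetical subset of the tracked skeleton: each vector joins two
# adjacent joints. Names and pairs are for illustration only.
BONES = [
    ("neck", "right_shoulder"),
    ("right_shoulder", "right_elbow"),
    ("right_elbow", "right_hand"),
    ("torso", "right_hip"),
    ("right_hip", "right_knee"),
]

def joint_vectors(skeleton):
    """skeleton: dict mapping joint name -> (x, y, z) position in meters."""
    return {pair: np.subtract(skeleton[pair[1]], skeleton[pair[0]])
            for pair in BONES}

# Example skeleton frame (positions are made up for demonstration).
skel = {"neck": (0.0, 1.5, 2.0), "right_shoulder": (0.2, 1.45, 2.0),
        "right_elbow": (0.3, 1.2, 2.0), "right_hand": (0.35, 0.95, 2.0),
        "torso": (0.0, 1.1, 2.0), "right_hip": (0.1, 0.9, 2.0),
        "right_knee": (0.12, 0.5, 2.0)}
vecs = joint_vectors(skel)
```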
To uniquely identify each vector, two angles are calculated:

θ_Y = cos⁻¹(v_iy / |v_i|), θ_YZ = tan⁻¹(v_iz / v_ix)

where θ_Y is the angle between the joint vector v_i and the positive Y-axis, v_iy denotes the Y component of v_i, and θ_YZ is the angle between the projection of the vector on the XZ-plane and the X-axis. A set of fourteen vectors computed from the twenty joint points serves as the input for motion feature extraction. Some parts of the body (head, wrists, and feet) are not considered, as they contribute little to classifying the movement.
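The two-angle parameterization can be sketched as follows, assuming this reconstruction of the omitted equations (arctan2 is used so the XZ-plane angle keeps its sign):

```python
import numpy as np

def vector_angles(v):
    """Return (theta_y, theta_yz) in degrees for a 3-D joint vector v.

    theta_y:  angle between v and the positive Y-axis (arccos of the
              normalized Y component).
    theta_yz: angle between the projection of v on the XZ-plane and
              the X-axis.
    """
    v = np.asarray(v, dtype=float)
    theta_y = np.degrees(np.arccos(v[1] / np.linalg.norm(v)))
    theta_yz = np.degrees(np.arctan2(v[2], v[0]))
    return theta_y, theta_yz
```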
The OpenBCI board collects 8-channel EEG signals through standard electrodes. Data were recorded from 3 children (5 ± 1.5 years old) and 3 elders (60 ± 5 years old). Our prototype application ran on a desktop computer and projected the activities onto a whiteboard, with the Kinect mounted at the top of the whiteboard. Five movement commands were used in all experiments: "left", "right", "up", "down", and "rest". To record the users' motions and faces during all activities, an HD camera was mounted near the Kinect. The OpenBCI board was fixed to the back of the Electro-Cap and transmitted data to the Windows PC via a Bluetooth module.

Overall structure
The raw skeleton and EEG signals are processed by the proposed classification system shown in Figure 1. The overall system comprises the following steps. First, noise is removed from the raw skeleton and EEG signals using the 1€ filter [5], a first-order low-pass filter with an adaptive cutoff frequency that is suitable for high-precision, responsive tasks in an event-driven system. Second, the filtered signals are segmented into constant-size frames. The motion and EEG features are then extracted from the segmented signals. As the motion and EEG recordings were annotated per second, each feature vector was computed from one second of motion and EEG signal. Each second-wise annotation indicates both the movement situation (Good or NG) and the engagement situation (positive or negative). Each one-second period is divided into 8 equal epochs. For each epoch, 14 × 3 motion feature values and 7 × 3 × 8 EEG feature values are calculated; concatenating the eight epochs yields a 336-dimensional motion feature vector and a 1344-dimensional EEG feature vector for each second. These feature vectors, along with their corresponding annotations, were collected to train our classifier.
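The 1€ filter used in the denoising step can be sketched as below, following the published algorithm; the parameter values are illustrative defaults, not the ones used in this system:

```python
import math

class OneEuroFilter:
    """Minimal 1-euro filter (Casiez et al.): a first-order low-pass
    filter whose cutoff frequency adapts to the signal's speed."""

    def __init__(self, freq, min_cutoff=1.0, beta=0.0, d_cutoff=1.0):
        self.freq = freq            # sampling frequency (Hz)
        self.min_cutoff = min_cutoff
        self.beta = beta            # speed coefficient
        self.d_cutoff = d_cutoff    # cutoff for the derivative filter
        self.x_prev = None
        self.dx_prev = 0.0

    def _alpha(self, cutoff):
        # smoothing factor of a first-order low-pass at this cutoff
        tau = 1.0 / (2.0 * math.pi * cutoff)
        return 1.0 / (1.0 + tau * self.freq)

    def __call__(self, x):
        if self.x_prev is None:
            self.x_prev = x
            return x
        # low-pass filtered derivative of the signal
        dx = (x - self.x_prev) * self.freq
        a_d = self._alpha(self.d_cutoff)
        dx_hat = a_d * dx + (1.0 - a_d) * self.dx_prev
        # cutoff grows with speed: low lag when moving, low jitter at rest
        cutoff = self.min_cutoff + self.beta * abs(dx_hat)
        a = self._alpha(cutoff)
        x_hat = a * x + (1.0 - a) * self.x_prev
        self.x_prev, self.dx_prev = x_hat, dx_hat
        return x_hat

# Smooth a noiseless step input at the Kinect's 30 Hz frame rate.
f = OneEuroFilter(freq=30.0)
smoothed = [f(x) for x in [0.0, 1.0, 1.0, 1.0]]
```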
Two frameworks are considered in our experiments: I. The motion and EEG features are used for the training and classification of movement and engagement, respectively (Figure 2a). II. The motion and EEG features are combined for SVM training and classification of both movement and engagement (Figure 2b). In framework I, we assume that the motion and EEG features are each suited to the movement and engagement classification tasks, respectively; in framework II, a classification model that integrates the motion and EEG features is built.
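The two frameworks can be illustrated with a minimal sketch using scikit-learn's SVC on synthetic random features; the feature dimensions follow the text, while the kernel choice and data are assumptions:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
motion = rng.normal(size=(n, 336))     # per-second motion feature vectors
eeg = rng.normal(size=(n, 1344))       # per-second EEG feature vectors
movement_y = rng.integers(0, 2, n)     # Good (1) / NG (0), random placeholder
engagement_y = rng.integers(0, 2, n)   # positive (1) / negative (0)

# Framework I: one modality per task
clf_movement = SVC(kernel="rbf").fit(motion, movement_y)
clf_engagement = SVC(kernel="rbf").fit(eeg, engagement_y)

# Framework II: concatenated motion + EEG features for both tasks
fused = np.hstack([motion, eeg])       # 336 + 1344 = 1680 dimensions
clf_movement_2 = SVC(kernel="rbf").fit(fused, movement_y)
clf_engagement_2 = SVC(kernel="rbf").fit(fused, engagement_y)
```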
In the evaluation scheme, the classification was performed independently on the recorded motion and EEG segments, and its performance was compared in terms of sensitivity, specificity, and accuracy:

Sensitivity = TP / (TP + FN), Specificity = TN / (TN + FP), Accuracy = (TP + TN) / (TP + TN + FP + FN)

where TP, TN, FP, and FN are the numbers of true positive, true negative, false positive, and false negative classified segments, respectively. Note that TP and TN refer to the numbers of correctly classified movement segments and correctly classified positive engagement segments, respectively.
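The three metrics can be computed directly from the segment counts:

```python
def evaluate(tp, tn, fp, fn):
    """Sensitivity, specificity and accuracy from classified-segment counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy
```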

Results
To obtain adequate results from the data of the 3 children and 3 elders, the proposed system was tested in four independent experiments: good movement, NG movement, positive engagement, and negative engagement. For the testing data, episodes of these four kinds of events were randomly extracted from each subject's recordings and manually annotated, while the data from the other subjects were used for training.
The proposed approaches were evaluated on the datasets shown in Table 1. The data length for each class is given in the table; the length in each cell was converted into frames using a frame size of 256 ms with 50% overlap. The four experiments were performed independently, and the training and testing data for each experiment did not overlap. The motion and EEG features for the experimental datasets were extracted and applied to the SVM for training and classification. Motion has been recommended as the best feature for gesture detection [6,7]; several related works have used motion features and obtained good performance. EEG features, on the other hand, reflect the mental activity of the user. The confusion matrix is shown in Table 2.
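The 256 ms / 50%-overlap framing can be sketched as follows; the 250 Hz sampling rate below is an assumption for illustration, not a figure stated in the paper:

```python
import numpy as np

def frame_signal(x, fs, frame_ms=256, overlap=0.5):
    """Split a 1-D signal into fixed-size frames with the given overlap."""
    frame_len = int(fs * frame_ms / 1000)       # samples per frame
    hop = int(frame_len * (1 - overlap))        # step between frame starts
    n_frames = 1 + (len(x) - frame_len) // hop  # full frames that fit
    return np.stack([x[i * hop: i * hop + frame_len] for i in range(n_frames)])

# One second of signal at an assumed 250 Hz sampling rate.
frames = frame_signal(np.arange(250), fs=250)
```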
Furthermore, the integrated features are applied to the movement and engagement SVMs for training and classification according to framework II. The confusion matrix is shown in Table 3.
For both frameworks, most errors occurred between NG-Positive and Good-Negative. The integrated features achieved 97.4% and 94.4% accuracy for the Good-Positive and NG-Positive classifications, while the accuracies for Good-Negative and NG-Negative were 95.21% and 98.60%, respectively. Moreover, Tables 2 and 3 show that the integrated features performed better in the movement and engagement classification tasks. Specifically, the motion- and EEG-based classification rates were 92.2%, 87.1%, 83.74%, and 91.35% for Good-Positive, NG-Positive, Good-Negative, and NG-Negative, respectively. The classification results of the two frameworks are shown in Table 4. Comparing the classifier performances for the motion- and EEG-based features, it is evident that framework II with the integrated features is superior in all experiments.

Conclusions
In this study, a simple and efficient movement and engagement classification system is presented. Based on motion and EEG features and two SVM classifiers, the proposed system outperforms motion-based systems. Built on a low-cost hardware platform that can be installed in a child's home, a nursing home, or a club, the proposed system is convenient to implement and provides a natural interface for children. The motion features can also guide automatic analysis and game design. Moreover, other kinds of interactive technologies, such as speech recognition, will also be explored in the training system.

Table 1. The information of the experimental datasets (seconds)

Table 2. The results using Framework I

Table 3. The results using Framework II