Rev Mex Ing Biomed 2015; 36 (3)
Applying Brain Signals Sonification for Automatic Classification
González CEF, Torres-García AA, Reyes-García CA, Villaseñor-Pineda L
Language: Spanish
References: 24
Page: 233-247
PDF size: 709.44 Kb.
ABSTRACT
In recent years, sonification of electroencephalograms (EEG) has been used as an alternative way to analyze brain signals by converting the EEG into audio. In this paper, we apply sonification to EEG signals recorded during imagined (unspoken) speech, with the aim of improving the automatic classification of 5 Spanish words. To test this, the brain signals of 27 healthy subjects were processed. The sonified signals were then processed to extract features with two different methods: the discrete wavelet transform (DWT) and Mel-frequency cepstral coefficients (MFCC), the latter commonly used in speech recognition tasks. To classify the signals, three classification algorithms were applied: Naive Bayes (NB), Support Vector Machine (SVM), and Random Forest (RF). Results were obtained using the 4 channels closest to Broca's and Wernicke's language areas, as well as all 14 channels of the EEG device used. The average accuracies over the 27 subjects for the 4-channel and 14-channel sets using EEG sonification were 55.83% and 64.14%, respectively, which are improvements over the classification rates of the imagined words obtained without sonification.
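To illustrate the first feature-extraction method mentioned above, a DWT-based feature vector can be sketched with a plain-NumPy Haar decomposition. This is a minimal sketch, not the authors' implementation; the choice of the Haar wavelet, 4 decomposition levels, and relative subband energies as features are all assumptions for illustration:

```python
import numpy as np

def haar_dwt_level(x):
    """One level of the Haar DWT: returns (approximation, detail) coefficients."""
    x = x[: len(x) - len(x) % 2]                  # truncate to even length
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def dwt_energy_features(signal, levels=4):
    """Relative energy of each detail subband plus the final approximation."""
    energies = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, detail = haar_dwt_level(approx)
        energies.append(np.sum(detail ** 2))
    energies.append(np.sum(approx ** 2))
    total = sum(energies)
    return [e / total for e in energies]

# Example: a toy "EEG" segment of 128 samples
rng = np.random.default_rng(0)
features = dwt_energy_features(rng.standard_normal(128), levels=4)
print(len(features))  # 5 values: 4 detail subbands + 1 approximation
```

Because the Haar transform is orthonormal, the subband energies sum to the signal's total energy, so the relative energies form a normalized feature vector per channel.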
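The second feature set, MFCCs, treats the sonified EEG like a speech signal: frame the signal, compute the power spectrum, apply a mel filterbank, take logs, and decorrelate with a DCT. The sketch below uses only NumPy; the sampling rate (128 Hz, typical of 14-channel consumer EEG headsets), frame size, hop, filter count, and number of coefficients are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale."""
    mel = lambda f: 2595 * np.log10(1 + f / 700.0)
    inv_mel = lambda m: 700 * (10 ** (m / 2595.0) - 1)
    pts = inv_mel(np.linspace(mel(0), mel(sr / 2), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):                     # rising slope
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):                     # falling slope
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def dct2(x, n_out):
    """Unnormalized DCT-II, keeping the first n_out coefficients."""
    N = len(x)
    n = np.arange(N)
    basis = np.cos(np.pi * np.outer(np.arange(n_out), 2 * n + 1) / (2 * N))
    return basis @ x

def mfcc(signal, sr=128, n_fft=64, hop=32, n_filters=12, n_coeffs=8):
    """MFCCs of a 1-D signal, averaged over frames."""
    fb = mel_filterbank(n_filters, n_fft, sr)
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frame = signal[start:start + n_fft] * np.hamming(n_fft)
        power = np.abs(np.fft.rfft(frame)) ** 2
        logmel = np.log(fb @ power + 1e-10)       # guard against log(0)
        frames.append(dct2(logmel, n_coeffs))
    return np.mean(frames, axis=0)

rng = np.random.default_rng(1)
feats = mfcc(rng.standard_normal(256))
print(feats.shape)  # (8,)
```

The resulting per-channel coefficient vectors would then be fed to a classifier such as NB, SVM, or RF, as in the evaluation described above.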
REFERENCES
[1] T. M. Rutkowski, A. Cichocki, and D. Mandic, “Information fusion for perceptual feedback: A brain activity sonification approach,” Signal Processing Techniques for Knowledge Extraction and Information Fusion, pp. 261–273, 2008.
[2] G. Marco-Zaccaria, “Sonification of EEG signals: A study on alpha band instantaneous coherence,” Master’s thesis, 2011.
[3] E. Miranda, A. Brouse, B. Boskamp, and H. Mullaney, “Plymouth brain-computer music interface project: Intelligent assistive technology for music-making,” International Computer Music Conference, 2005.
[4] J. Eaton and E. Miranda, “Real-time notation using brainwave control,” Sound and Music Computing Conference, 2013.
[5] G. Baier and T. Hermann, “The sonification of rhythms in human electroencephalogram.” International Conference on Auditory Display (ICAD), 2004.
[6] T. Hermann, P. Meinicke et al., “Sonification for EEG data analysis,” Proceedings of the 2002 International Conference on Auditory Display, 2002.
[7] G. Baier, T. Hermann, S. Sahle, and U. Stephani, “Event based sonification of EEG rhythms in real time,” Clinical Neurophysiology, vol. 118, no. 6, pp. 1377–1386, 2007.
[8] T. Hermann, G. Baier, U. Stephani, and H. Ritter, “Kernel regression mapping for vocal EEG sonification,” Proceedings of the International Conference on Auditory Display, 2008.
[9] M. Elgendi, J. Dauwels et al., “From auditory and visual to immersive neurofeedback: Application to diagnosis of Alzheimer’s disease,” Neural Computation, Neural Devices, and Neural Prosthesis, pp. 63–97, 2014.
[10] M. Wester and T. Schultz, “Unspoken Speech - Speech Recognition Based On Electroencephalography,” Master’s thesis, Institut für Theoretische Informatik, Universität Karlsruhe (TH), Karlsruhe, Germany, 2006.
[11] A. A. Torres-García, C. A. Reyes-García, and L. Villaseñor Pineda, “Toward a silent speech interface based on unspoken speech,” BIOSTEC - BIOSIGNALS, pp. 370–373, 2012.
[12] A. A. Torres-García, C. A. Reyes-García, and L. Villaseñor-Pineda, “Análisis de Señales Electroencefalográficas para la Clasificación de Habla Imaginada,” Revista Mexicana de Ingeniería Biomédica, vol. 34, no. 1, pp. 23– 39, 2013.
[13] E. F. González-Castañeda, A. A. Torres-García, C. A. Reyes-García, and L. Villaseñor-Pineda, “Sonificación de EEG para la clasificación de palabras no pronunciadas,” Research in Computing Science, vol. 74, pp. 61–72, 2014.
[14] C. Anderson, “Sonification - Brain Computer Interfaces Laboratory,” Department of Computer Science, Colorado State University, www.cs.colostate.edu/eeg/main/projects/sonification, 2005.
[15] A. A. Torres-García, “Clasificación de palabras no pronunciadas presentes en Electroencefalogramas (EEG),” Master’s thesis, 2011.
[16] X. Chi, J. Hagedorn et al., “EEG-based discrimination of imagined speech phonemes,” International Journal of Bioelectromagnetism, vol. 13, no. 4, pp. 201–206, 2011.
[17] X. Pei, D. L. Barbour, E. C. Leuthardt, and G. Schalk, “Decoding vowels and consonants in spoken and imagined words using electrocorticographic signals in humans,” Journal of neural engineering, vol. 8, no. 4, p. 046028, 2011.
[18] J. Mouriño, J. del R. Millán et al., “Spatial filtering in the training process of a brain computer interface,” Engineering in Medicine and Biology Society, Proceedings of the 23rd Annual International Conference of the IEEE, vol. 1, pp. 639–642, 2001.
[19] M. A. Pinsky, Introduction to Fourier Analysis and Wavelets. American Mathematical Society, 2002, vol. 102.
[20] X. Hu, H. Zhang et al., “Isolated word speech recognition system based on FPGA,” Journal of Computers, vol. 8, no. 12, pp. 3216–3222, 2013.
[21] R. Kohavi et al., “A study of cross-validation and bootstrap for accuracy estimation and model selection,” IJCAI, vol. 14, no. 2, pp. 1137–1145, 1995.
[22] G. H. John and P. Langley, “Estimating continuous distributions in bayesian classifiers,” in Proceedings of the Eleventh conference on Uncertainty in artificial intelligence. Morgan Kaufmann Publishers Inc., 1995, pp. 338–345.
[23] J. C. Platt, “Fast training of support vector machines using sequential minimal optimization,” in Advances in Kernel Methods. MIT Press, 1999, pp. 185–208.
[24] L. Rokach, Pattern Classification Using Ensemble Methods. World Scientific, 2009.