A Cross-Attention-Based Class Alignment Network for Cross-Subject EEG Classification in a Heterogeneous Space
Abstract
1. Introduction
2. Related Works
2.1. Transfer Learning
2.2. Self-Attention Mechanism
3. Methods
3.1. Overview
3.2. Network Architecture
3.2.1. Generator
3.2.2. Cross-Encoder
3.2.3. Class Discriminator
3.2.4. Classifier
3.3. Training Procedure
4. Experiments
4.1. Dataset and Data Preprocessing
4.1.1. Dataset 2a of BCI Competition IV
4.1.2. Dataset 1 of BCI Competition IV
4.2. Experimental Settings
4.3. DA Scenarios
4.4. Baseline Comparison
4.5. Ablation Experiment
4.5.1. Ancillary Effects of Source Domains
4.5.2. Effect of Cross-Encoder
4.5.3. Effect of Class Discriminator
4.6. Time Complexity Analysis
4.7. Visualization
5. Discussion and Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Lance, B.J.; Kerick, S.E.; Ries, A.J.; Oie, K.S.; McDowell, K. Brain-computer interface technologies in the coming decades. Proc. IEEE 2012, 100, 1585–1599. [Google Scholar] [CrossRef]
- Wolpaw, J.R.; Birbaumer, N.; McFarland, D.J.; Pfurtscheller, G.; Vaughan, T.M. Brain-computer interfaces for communication and control. Clin. Neurophysiol. 2002, 113, 767–791. [Google Scholar] [CrossRef] [PubMed]
- Liu, Y.; Ding, Y.; Li, C.; Cheng, J.; Song, R.C.; Wan, F.; Chen, X. Multi-channel EEG-based emotion recognition via a multi-level features guided capsule network. Comput. Biol. Med. 2020, 123, 103927. [Google Scholar] [CrossRef]
- Islam, M.R.; Islam, M.M.; Rahman, M.M.; Mondal, C.; Singha, S.K.; Ahmad, M.; Awal, A.; Islam, M.S.; Moni, M.A. EEG channel correlation based model for emotion recognition. Comput. Biol. Med. 2021, 136, 104757. [Google Scholar] [CrossRef]
- Wen, Y.Z.; Zhang, Y.J.; Wen, L.; Cao, H.J.; Ai, G.P.; Gu, M.H.; Wang, P.J.; Chen, H.L. A 65 nm/0.448 mW EEG processor with parallel architecture SVM and lifting wavelet transform for high-performance and low-power epilepsy detection. Comput. Biol. Med. 2022, 144, 105366. [Google Scholar] [CrossRef]
- Oliva, J.T.; Rosa, J. Binary and multiclass classifiers based on multitaper spectral features for epilepsy detection. Biomed. Signal Process. Control 2021, 66, 102469. [Google Scholar] [CrossRef]
- Zhang, Y.J.; Ma, J.F.; Zhang, C.; Chang, R.S. Electrophysiological frequency domain analysis of driver passive fatigue under automated driving conditions. Sci. Rep. 2021, 11, 20348. [Google Scholar] [CrossRef] [PubMed]
- Min, J.L.; Xiong, C.; Zhang, Y.G.; Cai, M. Driver fatigue detection based on prefrontal EEG using multi-entropy measures and hybrid model. Biomed. Signal Process. Control 2021, 69, 102857. [Google Scholar] [CrossRef]
- Kim, K.-T.; Carlson, T.; Lee, S.-W. Design of a robotic wheelchair with a motor imagery based brain-computer interface. In Proceedings of the 2013 International Winter Workshop on Brain-Computer Interface, BCI, Gangwon, Republic of Korea, 18–20 February 2013; pp. 46–48. [Google Scholar]
- Krusienski, D.J.; Shih, J.J. Spectral components of the P300 speller response in and adjacent to the hippocampus. In Proceedings of the 2012 IEEE International Conference on Systems, Man, and Cybernetics, SMC, Seoul, Republic of Korea, 14–17 October 2012; pp. 274–277. [Google Scholar]
- Shi, M.H.; Zhou, C.L.; Xie, J.; Li, S.Z.; Hong, Q.Y.; Jiang, M.; Chao, F.; Ren, W.F.; Liu, X.Q.; Zhou, D.J.; et al. Electroencephalogram-based brain-computer interface for the Chinese spelling system: A survey. Front. Inf. Technol. Electron. Eng. 2018, 19, 423–436. [Google Scholar] [CrossRef]
- Sadeghi, S.; Maleki, A. Character encoding based on occurrence probability enhances the performance of SSVEP-based BCI spellers. Biomed. Signal Process. Control 2020, 58, 101888. [Google Scholar] [CrossRef]
- Chen, X.G.; Zhao, B.; Wang, Y.J.; Gao, X.R. Combination of high frequency SSVEP-based BCI and computer vision for controlling a robotic arm. J. Neural Eng. 2019, 16, 026012. [Google Scholar] [CrossRef] [PubMed]
- Chen, X.G.; Zhao, B.; Wang, Y.J.; Gao, X.R. Combination of augmented reality based brain-computer interface and computer vision for high-level control of a robotic arm. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 3140–3147. [Google Scholar] [CrossRef] [PubMed]
- Grosse-Wentrup, M.; Buss, M. Multiclass common spatial patterns and information theoretic feature extraction. IEEE Trans. Biomed. Eng. 2008, 55, 1991–2000. [Google Scholar] [CrossRef] [PubMed]
- Ang, K.K.; Chin, Z.Y.; Zhang, H.; Guan, C. Filter bank common spatial pattern (FBCSP) in brain-computer interface. In Proceedings of the 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–8 June 2008; pp. 2390–2397. [Google Scholar]
- Kant, P.; Laskar, S.H.; Hazarika, J.; Mahamune, R. CWT based transfer learning for motor imagery classification for brain computer interfaces. J. Neurosci. Methods 2020, 345, 108886. [Google Scholar] [CrossRef] [PubMed]
- Bhattacharyya, A.; Singh, L.; Pachori, R.B. Fourier–Bessel series expansion based empirical wavelet transform for analysis of non-stationary signals. Digit. Signal Process. 2018, 78, 185–196. [Google Scholar] [CrossRef]
- Bhattacharyya, A.; Pachori, R.B. A multivariate approach for patient-specific EEG seizure detection using empirical wavelet transform. IEEE Trans. Biomed. Eng. 2017, 64, 2003–2015. [Google Scholar] [CrossRef]
- Chen, C.Y.; Wu, C.W.; Lin, C.T.; Chen, S.A. A novel classification method for motor imagery based on brain-computer interface. In Proceedings of the 2014 International Joint Conference on Neural Networks (IJCNN), Beijing, China, 6–11 July 2014; pp. 4099–4102. [Google Scholar]
- Fraiwan, L.; Lweesy, K.; Khasawneh, N.; Wenz, H.; Dickhaus, H. Automated sleep stage identification system based on time–frequency analysis of a single EEG channel and random forest classifier. Comput. Methods Programs Biomed. 2012, 108, 10–19. [Google Scholar] [CrossRef]
- Bishop, C.M. Pattern Recognition and Machine Learning; Springer: New York, NY, USA, 2006; Volume 2, pp. 645–678. [Google Scholar]
- Kousarrizi, M.R.N.; Ghanbari, A.A.; Teshnehlab, M.; Shorehdeli, M.A.; Gharaviri, A. Feature extraction and classification of EEG signals using wavelet transform, SVM and artificial neural networks for brain computer interfaces. In Proceedings of the 2009 International Joint Conference on Bioinformatics, Systems Biology and Intelligent Computing (IJCBS), Shanghai, China, 3–5 August 2009; pp. 352–355. [Google Scholar]
- Lawhern, V.J.; Solon, A.J.; Waytowich, N.R.; Gordon, S.M.; Hung, C.P.; Lance, B.J. EEGNet: A compact convolutional neural network for EEG-based brain–computer interfaces. J. Neural Eng. 2018, 15, 056013. [Google Scholar] [CrossRef]
- Schirrmeister, R.T.; Springenberg, J.T.; Fiederer, L.D.J.; Glasstetter, M.; Eggensperger, K.; Tangermann, M.; Hutter, F.; Burgard, W.; Ball, T. Deep learning with convolutional neural networks for EEG decoding and visualization. Hum. Brain Mapp. 2017, 38, 5391–5420. [Google Scholar] [CrossRef]
- Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359. [Google Scholar] [CrossRef]
- He, H.; Wu, D. Transfer learning for brain–computer interfaces: A Euclidean space data alignment approach. IEEE Trans. Biomed. Eng. 2020, 67, 399–410. [Google Scholar] [CrossRef] [PubMed]
- He, H.; Wu, D. Different set domain adaptation for brain-computer interfaces: A label alignment approach. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 1091–1108. [Google Scholar] [CrossRef] [PubMed]
- Wu, D.; Lawhern, V.J.; Hairston, W.D.; Lance, B.J. Switching EEG headsets made easy: Reducing offline calibration effort using active weighted adaptation regularization. IEEE Trans. Neural Syst. Rehabil. Eng. 2016, 24, 1125–1137. [Google Scholar] [CrossRef] [PubMed]
- Wu, H.; Xie, Q.; Yu, Z.; Zhang, J.; Liu, S.; Long, J. Unsupervised heterogeneous domain adaptation for EEG classification. J. Neural Eng. 2024, 21, 046018. [Google Scholar] [CrossRef]
- Busto, P.P.; Gall, J. Open set domain adaptation. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 754–763. [Google Scholar]
- Saito, K.; Yamamoto, S.; Ushiku, Y.; Harada, T. Open set domain adaptation by backpropagation. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 153–168. [Google Scholar]
- You, K.; Long, M.; Cao, Z.; Wang, J.; Jordan, M.I. Universal domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 2720–2729. [Google Scholar]
- Jin, Y.M.; Luo, Y.D.; Zheng, W.L.; Lu, B.L. EEG-Based emotion recognition using domain adaptation network. In Proceedings of the 2017 International Conference on Orange Technologies, ICOT, Singapore, 8–10 December 2017; pp. 222–225. [Google Scholar]
- Hang, W.L.; Feng, W.; Du, R.Y.; Liang, S.; Chen, Y.; Wang, Q.; Liu, X.J. Cross-subject EEG signal recognition using deep domain adaptation network. IEEE Access 2019, 7, 128273–128282. [Google Scholar] [CrossRef]
- Chen, P.Y.; Gao, Z.K.; Yin, M.M.; Wu, J.L.; Ma, K.; Grebogi, C. Multiattention adaptation network for motor imagery recognition. IEEE Trans. Syst. Man Cybern. Syst. 2021, 52, 5127–5139. [Google Scholar] [CrossRef]
- Hong, X.L.; Zheng, Q.Q.; Liu, L.Y.; Chen, P.Y.; Ma, K.; Gao, Z.K.; Zheng, Y.F. Dynamic joint domain adaptation network for motor imagery classification. IEEE Trans. Neural Syst. Rehabil. Eng. 2021, 29, 556–565. [Google Scholar] [CrossRef]
- Zhao, H.; Zheng, Q.Q.; Ma, K.; Li, H.Q.; Zheng, Y.F. Deep representation based domain adaptation for nonstationary EEG classification. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 535–545. [Google Scholar] [CrossRef]
- Raza, H.; Cecotti, H.; Li, Y.; Prasad, G. Adaptive learning with covariate shift-detection for motor imagery-based brain–computer interface. Soft Comput. 2016, 20, 3085–3096. [Google Scholar] [CrossRef]
- Jeon, E.; Ko, W.; Suk, H. Domain adaptation with source selection for motor-imagery based BCI. In Proceedings of the 2019 7th International Winter Conference on Brain-Computer Interface, BCI, Gangwon, Republic of Korea, 18–20 February 2019; pp. 1–4. [Google Scholar]
- Tang, X.L.; Zhang, X.R. Conditional adversarial domain adaptation neural network for motor imagery EEG decoding. Entropy 2020, 22, 96. [Google Scholar] [CrossRef]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017; Volume 30. [Google Scholar]
- Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; Liu, P.J. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res. 2020, 21, 1–67. [Google Scholar]
- Bozic, V.; Dordevic, D.; Coppola, D.; Thommes, J.; Singh, S.P. Rethinking attention: Exploring shallow feed-forward neural networks as an alternative to attention layers in transformers. arXiv 2023, arXiv:2311.10642. [Google Scholar]
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 11–17 October 2021; pp. 9992–10002. [Google Scholar]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
- Zhang, D.; Qi, T.; Gao, J. Transformer-based image super-resolution and its lightweight. Multimed. Tools Appl. 2024, 83, 68625–68649. [Google Scholar] [CrossRef]
- Song, Y.; Zheng, Q.; Liu, B.; Gao, X. EEG conformer: Convolutional transformer for EEG decoding and visualization. IEEE Trans. Neural Syst. Rehabil. Eng. 2022, 31, 710–719. [Google Scholar] [CrossRef]
- Zhang, D.; Li, H.; Xie, J. MI-CAT: A transformer-based domain adaptation network for motor imagery classification. Neural Netw. 2023, 165, 451–462. [Google Scholar] [CrossRef]
- Li, H.; Zhang, D.; Xie, J. MI-DABAN: A dual-attention-based adversarial network for motor imagery classification. Comput. Biol. Med. 2023, 152, 106420. [Google Scholar] [CrossRef]
- Song, Y.; Zheng, Q.; Wang, Q.; Gao, X.; Heng, P.A. Global adaptive transformer for cross-subject enhanced EEG classification. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 2767–2777. [Google Scholar] [CrossRef]
- Brunner, C.; Leeb, R.; Müller-Putz, G.; Schlögl, A.; Pfurtscheller, G. BCI Competition 2008—Graz Data Set A; Technical Report; Institute for Knowledge Discovery, Laboratory of Brain-Computer Interfaces, Graz University of Technology: Graz, Austria, 2008; pp. 136–142. [Google Scholar]
- Blankertz, B.; Dornhege, G.; Krauledat, M.; Müller, K.R.; Curio, G. The non-invasive Berlin brain-computer interface: Fast acquisition of effective performance in untrained subjects. NeuroImage 2007, 37, 539–550. [Google Scholar] [CrossRef]
Scenario | No. of Classes | Features | Labels | No. of Classifications | Examples (Source → Target)
---|---|---|---|---|---
1 | 2 | same | partially different | 24 | 
2 | 3 | same | partially different | 12 | 
3 | 2 | same | completely different | 6 | 
4 | 3 | same | completely different | 12 | 
5 | 2 | different | completely different | 1 | 
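All of the scenarios above are handled by the cross-encoder (Section 3.2.2), whose core operation is cross-attention between one domain's features (queries) and another's (keys/values). As an illustrative sketch only — the function and variable names here are hypothetical and not the authors' implementation — single-head scaled dot-product cross-attention can be written as:

```python
import numpy as np

def cross_attention(q_feats, kv_feats):
    """Scaled dot-product cross-attention (illustrative sketch).

    q_feats:  (n_q, d)  features of one domain, used as queries
    kv_feats: (n_kv, d) features of the other domain, used as keys/values
    Returns:  (n_q, d)  query features re-expressed as mixtures of kv_feats
    """
    d_k = q_feats.shape[-1]
    # similarity of every query to every key, scaled by sqrt(d_k)
    scores = q_feats @ kv_feats.T / np.sqrt(d_k)          # (n_q, n_kv)
    # numerically stable row-wise softmax
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # attention-weighted combination of the other domain's features
    return weights @ kv_feats                              # (n_q, d)
```

In a heterogeneous setting, a learned projection would first map both domains to the common dimension `d`; this sketch assumes that projection has already been applied.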
Scenario | S | T | E&D | A01 | A02 | A03 | A04 | A05 | A06 | A07 | A08 | A09 | Avg.
---|---|---|---|---|---|---|---|---|---|---|---|---|---
1 | ✓ | ✓ | ✓ | 96.77 | 87.10 | 85.48 | 76.61 | 62.10 | 75.00 | 95.16 | 92.74 | 87.10 | 84.23
1 |  | ✓ | ✓ | 94.36 | 88.71 | 80.65 | 75.00 | 59.68 | 68.55 | 97.58 | 82.26 | 82.29 | 81.45
1 |  | ✓ |  | 87.10 | 82.26 | 86.29 | 73.39 | 61.29 | 76.61 | 96.77 | 86.29 | 83.87 | 81.54
2 | ✓ | ✓ | ✓ | 89.25 | 56.99 | 93.55 | 68.82 | 54.30 | 54.84 | 77.42 | 93.55 | 70.43 | 73.24
2 |  | ✓ | ✓ | 83.33 | 51.61 | 87.63 | 59.14 | 55.38 | 54.84 | 82.80 | 90.32 | 71.50 | 70.73
2 |  | ✓ |  | 80.65 | 44.62 | 87.63 | 52.15 | 53.22 | 55.38 | 75.27 | 87.10 | 76.88 | 68.10
3 | ✓ | ✓ | ✓ | 76.61 | 87.10 | 67.74 | 71.77 | 66.94 | 65.32 | 87.90 | 92.74 | 80.65 | 77.42
3 |  | ✓ | ✓ | 76.61 | 87.90 | 75.81 | 66.94 | 66.94 | 66.13 | 87.10 | 89.52 | 75.81 | 76.97
3 |  | ✓ |  | 74.19 | 89.52 | 77.42 | 79.03 | 56.45 | 63.71 | 82.26 | 90.32 | 83.07 | 77.33
4 | ✓ | ✓ | ✓ | 90.32 | 51.61 | 93.55 | 67.20 | 50.54 | 62.37 | 85.48 | 94.62 | 77.96 | 74.85
4 |  | ✓ | ✓ | 83.33 | 51.61 | 87.63 | 59.14 | 55.38 | 54.84 | 82.80 | 90.32 | 71.50 | 70.73
4 |  | ✓ |  | 80.65 | 44.62 | 87.63 | 52.15 | 53.22 | 55.38 | 75.27 | 87.10 | 76.88 | 68.10
5 | ✓ | ✓ | ✓ | 74.19 | 90.32 | 85.48 | 71.77 | 55.65 | 62.10 | 81.77 | 90.32 | 73.39 | 75.00
5 |  | ✓ | ✓ | 76.61 | 87.90 | 75.81 | 66.94 | 66.94 | 66.13 | 87.10 | 89.52 | 75.81 | 76.97
5 |  | ✓ |  | 74.19 | 89.52 | 77.42 | 79.03 | 56.45 | 63.71 | 82.26 | 90.32 | 83.07 | 77.33
Scenario | r | A01 | A02 | A03 | A04 | A05 | A06 | A07 | A08 | A09 | Avg.
---|---|---|---|---|---|---|---|---|---|---|---
1 | 2 | 97.58 | 66.13 | 98.39 | 84.68 | 70.16 | 68.55 | 95.96 | 96.77 | 97.58 | 86.20
1 | 3 | 95.97 | 66.94 | 99.19 | 83.07 | 69.36 | 68.55 | 97.58 | 97.58 | 97.58 | 86.20
1 | 4 | 96.77 | 69.35 | 97.58 | 83.87 | 70.96 | 66.93 | 96.77 | 99.19 | 96.77 | 86.47
1 | 5 | 95.97 | 66.94 | 97.58 | 83.87 | 70.97 | 66.94 | 96.77 | 97.58 | 97.58 | 86.02
1 | 0 | 95.97 | 66.93 | 98.39 | 83.07 | 70.16 | 65.32 | 95.97 | 98.39 | 98.39 | 85.84
4 | 2 | 90.32 | 68.28 | 87.63 | 57.53 | 55.91 | 61.29 | 84.95 | 84.95 | 79.03 | 74.43
4 | 3 | 90.32 | 66.13 | 87.10 | 57.53 | 51.07 | 62.90 | 90.32 | 85.48 | 76.88 | 74.19
4 | 4 | 89.25 | 68.28 | 87.63 | 56.99 | 55.91 | 60.21 | 86.02 | 83.87 | 75.27 | 73.72
4 | 5 | 89.25 | 65.05 | 86.56 | 55.91 | 55.91 | 62.90 | 84.95 | 85.48 | 81.18 | 74.13
4 | 0 | 89.25 | 63.44 | 86.56 | 57.53 | 54.84 | 59.14 | 84.41 | 83.33 | 77.42 | 72.88
Scenario | Training Time (min) | Testing Time (ms) |
---|---|---|
1 | 23 | 5.64 |
2 | 36 | 5.58 |
3 | 23 | 6.04 |
4 | 37 | 5.60 |
5 | 31 | 5.79 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Ma, S.; Zhang, D. A Cross-Attention-Based Class Alignment Network for Cross-Subject EEG Classification in a Heterogeneous Space. Sensors 2024, 24, 7080. https://doi.org/10.3390/s24217080